Test Report: Hyper-V_Windows master

b522747fea7d12101d906a75c46b71d9d9f96e61:2023-02-19:27963

Failed tests (6/292)

| Order | Failed test                                   | Duration (s) |
|-------|-----------------------------------------------|--------------|
| 196   | TestMultiNode/serial/PingHostFrom2Pods        | 39.41        |
| 202   | TestMultiNode/serial/RestartKeepsNodes        | 348.9        |
| 216   | TestRunningBinaryUpgrade                      | 429.7        |
| 233   | TestNoKubernetes/serial/StartWithK8s          | 336.22       |
| 235   | TestStoppedBinaryUpgrade/Upgrade              | 360.88       |
| 245   | TestPause/serial/SecondStartNoReconfiguration | 234.97       |
TestMultiNode/serial/PingHostFrom2Pods (39.41s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- exec busybox-6b86dd6d48-brhr9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- exec busybox-6b86dd6d48-brhr9 -- sh -c "ping -c 1 172.28.240.1"
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- exec busybox-6b86dd6d48-brhr9 -- sh -c "ping -c 1 172.28.240.1": exit status 1 (10.5943976s)

-- stdout --
	PING 172.28.240.1 (172.28.240.1): 56 data bytes
	
	--- 172.28.240.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
multinode_test.go:555: Failed to ping host (172.28.240.1) from pod (busybox-6b86dd6d48-brhr9): exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- exec busybox-6b86dd6d48-xg2wx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- exec busybox-6b86dd6d48-xg2wx -- sh -c "ping -c 1 172.28.240.1"
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- exec busybox-6b86dd6d48-xg2wx -- sh -c "ping -c 1 172.28.240.1": exit status 1 (10.6501755s)

-- stdout --
	PING 172.28.240.1 (172.28.240.1): 56 data bytes
	
	--- 172.28.240.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
multinode_test.go:555: Failed to ping host (172.28.240.1) from pod (busybox-6b86dd6d48-xg2wx): exit status 1
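For reference, the IP-extraction pipeline the test runs inside each pod (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) can be exercised locally against sample output. The BusyBox-style `nslookup` output below is illustrative, not captured from this run; it assumes the answer record lands on line 5, which is what the `NR==5` in the test implies:

```shell
#!/bin/sh
# Illustrative BusyBox-style nslookup output (NOT from this run).
# Line 5 holds the answer record; the test's pipeline selects that
# line with awk and takes the third space-delimited field (the IP).
sample_output='Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: host.minikube.internal
Address 1: 172.28.240.1 host.minikube.internal'

host_ip=$(printf '%s\n' "$sample_output" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"   # 172.28.240.1
```

The extracted address (172.28.240.1 in this run) is then the target of the failing `ping -c 1`; the 100% packet loss above means name resolution succeeded but ICMP from the pod to the Hyper-V host did not.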
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-657900 -n multinode-657900
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-657900 -n multinode-657900: (5.2133163s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 logs -n 25: (4.4943045s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-208200 ssh -- ls                    | mount-start-2-208200 | minikube1\jenkins | v1.29.0 | 19 Feb 23 03:56 GMT | 19 Feb 23 03:56 GMT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-208200                           | mount-start-1-208200 | minikube1\jenkins | v1.29.0 | 19 Feb 23 03:56 GMT | 19 Feb 23 03:56 GMT |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-208200 ssh -- ls                    | mount-start-2-208200 | minikube1\jenkins | v1.29.0 | 19 Feb 23 03:56 GMT | 19 Feb 23 03:56 GMT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-208200                           | mount-start-2-208200 | minikube1\jenkins | v1.29.0 | 19 Feb 23 03:56 GMT | 19 Feb 23 03:57 GMT |
	| start   | -p mount-start-2-208200                           | mount-start-2-208200 | minikube1\jenkins | v1.29.0 | 19 Feb 23 03:57 GMT | 19 Feb 23 03:58 GMT |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host         | mount-start-2-208200 | minikube1\jenkins | v1.29.0 | 19 Feb 23 03:58 GMT |                     |
	|         | --profile mount-start-2-208200 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-208200 ssh -- ls                    | mount-start-2-208200 | minikube1\jenkins | v1.29.0 | 19 Feb 23 03:58 GMT | 19 Feb 23 03:58 GMT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-208200                           | mount-start-2-208200 | minikube1\jenkins | v1.29.0 | 19 Feb 23 03:58 GMT | 19 Feb 23 03:58 GMT |
	| delete  | -p mount-start-1-208200                           | mount-start-1-208200 | minikube1\jenkins | v1.29.0 | 19 Feb 23 03:58 GMT | 19 Feb 23 03:58 GMT |
	| start   | -p multinode-657900                               | multinode-657900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 03:58 GMT | 19 Feb 23 04:02 GMT |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-657900 -- apply -f                   | multinode-657900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:02 GMT | 19 Feb 23 04:02 GMT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-657900 -- rollout                    | multinode-657900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:02 GMT | 19 Feb 23 04:02 GMT |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-657900 -- get pods -o                | multinode-657900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:02 GMT | 19 Feb 23 04:02 GMT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-657900 -- get pods -o                | multinode-657900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:02 GMT | 19 Feb 23 04:02 GMT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-657900 -- exec                       | multinode-657900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:02 GMT | 19 Feb 23 04:02 GMT |
	|         | busybox-6b86dd6d48-brhr9 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-657900 -- exec                       | multinode-657900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:02 GMT | 19 Feb 23 04:02 GMT |
	|         | busybox-6b86dd6d48-xg2wx --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-657900 -- exec                       | multinode-657900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:02 GMT | 19 Feb 23 04:02 GMT |
	|         | busybox-6b86dd6d48-brhr9 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-657900 -- exec                       | multinode-657900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:02 GMT | 19 Feb 23 04:02 GMT |
	|         | busybox-6b86dd6d48-xg2wx --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-657900 -- exec                       | multinode-657900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:02 GMT | 19 Feb 23 04:02 GMT |
	|         | busybox-6b86dd6d48-brhr9 -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-657900 -- exec                       | multinode-657900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:02 GMT | 19 Feb 23 04:02 GMT |
	|         | busybox-6b86dd6d48-xg2wx -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-657900 -- get pods -o                | multinode-657900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:03 GMT | 19 Feb 23 04:03 GMT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-657900 -- exec                       | multinode-657900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:03 GMT | 19 Feb 23 04:03 GMT |
	|         | busybox-6b86dd6d48-brhr9                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-657900 -- exec                       | multinode-657900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:03 GMT |                     |
	|         | busybox-6b86dd6d48-brhr9 -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.240.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-657900 -- exec                       | multinode-657900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:03 GMT | 19 Feb 23 04:03 GMT |
	|         | busybox-6b86dd6d48-xg2wx                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-657900 -- exec                       | multinode-657900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:03 GMT |                     |
	|         | busybox-6b86dd6d48-xg2wx -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.240.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/19 03:58:27
	Running on machine: minikube1
	Binary: Built with gc go1.20 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0219 03:58:27.639726    8476 out.go:296] Setting OutFile to fd 836 ...
	I0219 03:58:27.700924    8476 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 03:58:27.700924    8476 out.go:309] Setting ErrFile to fd 964...
	I0219 03:58:27.700924    8476 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 03:58:27.718285    8476 out.go:303] Setting JSON to false
	I0219 03:58:27.721718    8476 start.go:125] hostinfo: {"hostname":"minikube1","uptime":16097,"bootTime":1676763010,"procs":148,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2604 Build 19045.2604","kernelVersion":"10.0.19045.2604 Build 19045.2604","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0219 03:58:27.721779    8476 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0219 03:58:27.728757    8476 out.go:177] * [multinode-657900] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	I0219 03:58:27.733313    8476 notify.go:220] Checking for updates...
	I0219 03:58:27.737081    8476 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 03:58:27.740813    8476 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0219 03:58:27.743124    8476 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0219 03:58:27.746105    8476 out.go:177]   - MINIKUBE_LOCATION=master
	I0219 03:58:27.748807    8476 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0219 03:58:27.751155    8476 driver.go:365] Setting default libvirt URI to qemu:///system
	I0219 03:58:29.368808    8476 out.go:177] * Using the hyperv driver based on user configuration
	I0219 03:58:29.371031    8476 start.go:296] selected driver: hyperv
	I0219 03:58:29.371031    8476 start.go:857] validating driver "hyperv" against <nil>
	I0219 03:58:29.371031    8476 start.go:868] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0219 03:58:29.418462    8476 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0219 03:58:29.419844    8476 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0219 03:58:29.419973    8476 cni.go:84] Creating CNI manager for ""
	I0219 03:58:29.419973    8476 cni.go:136] 0 nodes found, recommending kindnet
	I0219 03:58:29.419973    8476 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0219 03:58:29.420034    8476 start_flags.go:319] config:
	{Name:multinode-657900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-657900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 03:58:29.420141    8476 iso.go:125] acquiring lock: {Name:mk0a282de77c20a01e287b73437e6c43df35e4e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 03:58:29.424576    8476 out.go:177] * Starting control plane node multinode-657900 in cluster multinode-657900
	I0219 03:58:29.427308    8476 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 03:58:29.427542    8476 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0219 03:58:29.427630    8476 cache.go:57] Caching tarball of preloaded images
	I0219 03:58:29.428025    8476 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0219 03:58:29.428025    8476 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0219 03:58:29.428590    8476 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\config.json ...
	I0219 03:58:29.428703    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\config.json: {Name:mk02aca41cda802cff50a108ebd3fc9825d74e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 03:58:29.429354    8476 cache.go:193] Successfully downloaded all kic artifacts
	I0219 03:58:29.429886    8476 start.go:364] acquiring machines lock for multinode-657900: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0219 03:58:29.429972    8476 start.go:368] acquired machines lock for "multinode-657900" in 36.8µs
	I0219 03:58:29.429972    8476 start.go:93] Provisioning new machine with config: &{Name:multinode-657900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-657900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 03:58:29.429972    8476 start.go:125] createHost starting for "" (driver="hyperv")
	I0219 03:58:29.431959    8476 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0219 03:58:29.432996    8476 start.go:159] libmachine.API.Create for "multinode-657900" (driver="hyperv")
	I0219 03:58:29.432996    8476 client.go:168] LocalClient.Create starting
	I0219 03:58:29.432996    8476 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0219 03:58:29.432996    8476 main.go:141] libmachine: Decoding PEM data...
	I0219 03:58:29.432996    8476 main.go:141] libmachine: Parsing certificate...
	I0219 03:58:29.434168    8476 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0219 03:58:29.434168    8476 main.go:141] libmachine: Decoding PEM data...
	I0219 03:58:29.434168    8476 main.go:141] libmachine: Parsing certificate...
	I0219 03:58:29.434168    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0219 03:58:29.858283    8476 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0219 03:58:29.858283    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:29.858514    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0219 03:58:30.478898    8476 main.go:141] libmachine: [stdout =====>] : False
	
	I0219 03:58:30.478898    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:30.478898    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 03:58:30.988476    8476 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 03:58:30.988735    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:30.988819    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 03:58:32.434797    8476 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 03:58:32.434797    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:32.437796    8476 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso...
	I0219 03:58:32.823601    8476 main.go:141] libmachine: Creating SSH key...
	I0219 03:58:33.036173    8476 main.go:141] libmachine: Creating VM...
	I0219 03:58:33.036173    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 03:58:34.366502    8476 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 03:58:34.366502    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:34.366502    8476 main.go:141] libmachine: Using switch "Default Switch"
	I0219 03:58:34.366502    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 03:58:35.003166    8476 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 03:58:35.003401    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:35.003401    8476 main.go:141] libmachine: Creating VHD
	I0219 03:58:35.003493    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\fixed.vhd' -SizeBytes 10MB -Fixed
	I0219 03:58:36.678441    8476 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 3936BD02-42D5-4511-84FE-E3FB628F1A4C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0219 03:58:36.678765    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:36.678765    8476 main.go:141] libmachine: Writing magic tar header
	I0219 03:58:36.678824    8476 main.go:141] libmachine: Writing SSH key tar header
	I0219 03:58:36.691217    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\disk.vhd' -VHDType Dynamic -DeleteSource
	I0219 03:58:38.404657    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 03:58:38.404657    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:38.404765    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\disk.vhd' -SizeBytes 20000MB
	I0219 03:58:39.758901    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 03:58:39.759137    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:39.759362    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-657900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0219 03:58:41.672787    8476 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-657900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0219 03:58:41.672787    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:41.672787    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-657900 -DynamicMemoryEnabled $false
	I0219 03:58:42.491897    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 03:58:42.491897    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:42.491897    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-657900 -Count 2
	I0219 03:58:43.284279    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 03:58:43.284363    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:43.284471    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-657900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\boot2docker.iso'
	I0219 03:58:44.415649    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 03:58:44.415649    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:44.415714    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-657900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\disk.vhd'
	I0219 03:58:45.653460    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 03:58:45.653460    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:45.653460    8476 main.go:141] libmachine: Starting VM...
	I0219 03:58:45.653541    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-657900
	I0219 03:58:47.370424    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 03:58:47.370488    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:47.370488    8476 main.go:141] libmachine: Waiting for host to start...
	I0219 03:58:47.370488    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:58:48.115413    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:58:48.115634    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:48.115634    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:58:49.158293    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 03:58:49.158293    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:50.162125    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:58:50.892037    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:58:50.892037    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:50.892037    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:58:51.912122    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 03:58:51.912403    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:52.914482    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:58:53.663598    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:58:53.663813    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:53.663890    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:58:54.634264    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 03:58:54.634421    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:55.637183    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:58:56.336822    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:58:56.336822    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:56.337008    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:58:57.303166    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 03:58:57.303203    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:58.304938    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:58:58.985369    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:58:58.985369    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:58:58.985369    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:58:59.954060    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 03:58:59.954131    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:00.957950    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:01.642765    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:01.642765    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:01.642878    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:02.610797    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 03:59:02.610797    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:03.626200    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:04.312955    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:04.313008    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:04.313072    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:05.311403    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 03:59:05.311465    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:06.314583    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:07.001829    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:07.001829    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:07.001919    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:07.999694    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 03:59:07.999694    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:09.003039    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:09.707073    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:09.707073    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:09.707073    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:10.710791    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 03:59:10.710940    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:11.714349    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:12.427585    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:12.427848    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:12.427848    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:13.489013    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 03:59:13.489079    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:13.489079    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:14.203638    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:14.203638    8476 main.go:141] libmachine: [stderr =====>] : 
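	The alternating `( Hyper-V\Get-VM ... ).state` and `...ipaddresses[0]` executions above are a retry loop: the driver polls roughly once a second until the guest's network adapter reports an address (here, not until 03:59:13). A minimal shell sketch of that poll-until-non-empty pattern, with a `probe` stub standing in for the PowerShell query (all names illustrative, not minikube's):

	```shell
	# Stub for the Hyper-V "ipaddresses[0]" query: empty until the 3rd attempt,
	# mimicking a guest that needs a few polls to acquire an address.
	probe() { [ "$1" -ge 3 ] && echo "172.28.246.233" || true; }

	tries=0
	ip=""
	while [ -z "$ip" ]; do            # loop while the probe returns nothing
	  tries=$((tries + 1))
	  ip=$(probe "$tries")            # the real driver sleeps ~1s between probes
	done
	echo "got IP after $tries tries: $ip"
	```

	The real loop additionally re-checks the VM state before each address query, so a crashed VM fails fast instead of polling forever.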
	I0219 03:59:14.203736    8476 machine.go:88] provisioning docker machine ...
	I0219 03:59:14.203797    8476 buildroot.go:166] provisioning hostname "multinode-657900"
	I0219 03:59:14.203797    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:14.901006    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:14.901225    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:14.901225    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:15.893506    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 03:59:15.893506    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:15.897176    8476 main.go:141] libmachine: Using SSH client type: native
	I0219 03:59:15.907349    8476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.233 22 <nil> <nil>}
	I0219 03:59:15.907349    8476 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-657900 && echo "multinode-657900" | sudo tee /etc/hostname
	I0219 03:59:16.058173    8476 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-657900
	
	I0219 03:59:16.059174    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:16.791548    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:16.791548    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:16.791636    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:17.817385    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 03:59:17.817466    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:17.822497    8476 main.go:141] libmachine: Using SSH client type: native
	I0219 03:59:17.823386    8476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.233 22 <nil> <nil>}
	I0219 03:59:17.823386    8476 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-657900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-657900/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-657900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 03:59:17.978359    8476 main.go:141] libmachine: SSH cmd err, output: <nil>: 
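	The `/etc/hosts` fragment the provisioner just ran (add a `127.0.1.1` entry for the hostname, rewriting an existing one if present) can be exercised locally against a temp copy rather than the real file; this sketch assumes GNU grep/sed and uses illustrative paths only:

	```shell
	# Work on a throwaway copy of /etc/hosts
	hosts=$(mktemp)
	printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$hosts"
	name=multinode-657900

	if ! grep -q "\s$name\$" "$hosts"; then        # hostname not yet mapped
	  if grep -q '^127.0.1.1\s' "$hosts"; then     # an old 127.0.1.1 line exists
	    sed -i "s/^127.0.1.1\s.*/127.0.1.1 $name/" "$hosts"   # rewrite it
	  else
	    echo "127.0.1.1 $name" >> "$hosts"         # otherwise append a new one
	  fi
	fi
	grep '^127.0.1.1' "$hosts"
	```

	Using `127.0.1.1` (rather than `127.0.0.1`) for the machine's own hostname follows the Debian convention the buildroot guest mirrors.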
	I0219 03:59:17.978454    8476 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 03:59:17.978554    8476 buildroot.go:174] setting up certificates
	I0219 03:59:17.978660    8476 provision.go:83] configureAuth start
	I0219 03:59:17.978757    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:18.683876    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:18.683876    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:18.683876    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:19.692988    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 03:59:19.692988    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:19.692988    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:20.421286    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:20.421533    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:20.421649    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:21.432021    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 03:59:21.432021    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:21.432231    8476 provision.go:138] copyHostCerts
	I0219 03:59:21.432367    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0219 03:59:21.432655    8476 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 03:59:21.432655    8476 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 03:59:21.433071    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 03:59:21.434050    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0219 03:59:21.434275    8476 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 03:59:21.434275    8476 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 03:59:21.434275    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 03:59:21.435626    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0219 03:59:21.435626    8476 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 03:59:21.435626    8476 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 03:59:21.436154    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 03:59:21.437684    8476 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-657900 san=[172.28.246.233 172.28.246.233 localhost 127.0.0.1 minikube multinode-657900]
	I0219 03:59:21.530669    8476 provision.go:172] copyRemoteCerts
	I0219 03:59:21.540606    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 03:59:21.540606    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:22.237365    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:22.237365    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:22.237365    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:23.285608    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 03:59:23.285608    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:23.285608    8476 sshutil.go:53] new ssh client: &{IP:172.28.246.233 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\id_rsa Username:docker}
	I0219 03:59:23.395097    8476 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.8544972s)
	I0219 03:59:23.395097    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0219 03:59:23.395768    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 03:59:23.437292    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0219 03:59:23.437693    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0219 03:59:23.474811    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0219 03:59:23.474811    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0219 03:59:23.515355    8476 provision.go:86] duration metric: configureAuth took 5.5366451s
	I0219 03:59:23.515422    8476 buildroot.go:189] setting minikube options for container-runtime
	I0219 03:59:23.515531    8476 config.go:182] Loaded profile config "multinode-657900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 03:59:23.515531    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:24.217493    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:24.217493    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:24.217493    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:25.291520    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 03:59:25.291576    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:25.294852    8476 main.go:141] libmachine: Using SSH client type: native
	I0219 03:59:25.295938    8476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.233 22 <nil> <nil>}
	I0219 03:59:25.295938    8476 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 03:59:25.422786    8476 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 03:59:25.422961    8476 buildroot.go:70] root file system type: tmpfs
	I0219 03:59:25.423197    8476 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 03:59:25.423335    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:26.144713    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:26.144713    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:26.144713    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:27.190480    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 03:59:27.190480    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:27.195621    8476 main.go:141] libmachine: Using SSH client type: native
	I0219 03:59:27.196251    8476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.233 22 <nil> <nil>}
	I0219 03:59:27.196904    8476 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 03:59:27.362379    8476 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
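	As the comments inside the generated unit explain, the bare `ExecStart=` line is deliberate: for a non-oneshot service, systemd treats multiple `ExecStart=` values as an error, so an empty assignment first clears any command inherited from a base unit before the real one is set. A small sketch verifying that the written file carries exactly that pair of directives (temp path instead of `/lib/systemd/system`; no root or `sudo tee` needed):

	```shell
	unit=$(mktemp)
	# Write a trimmed-down unit the same way the log does: printf piped to tee
	printf '%s\n' '[Service]' 'ExecStart=' 'ExecStart=/usr/bin/dockerd ...flags...' \
	  | tee "$unit" >/dev/null
	n=$(grep -c '^ExecStart=' "$unit")   # expect 2: the clearing line + the real one
	echo "ExecStart directives: $n"
	```

	systemd itself interprets the first, empty directive as "reset the list", so only the second command actually runs.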
	I0219 03:59:27.362456    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:28.076768    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:28.076768    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:28.077014    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:29.120516    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 03:59:29.120516    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:29.124181    8476 main.go:141] libmachine: Using SSH client type: native
	I0219 03:59:29.124856    8476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.233 22 <nil> <nil>}
	I0219 03:59:29.124856    8476 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 03:59:30.168036    8476 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
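	The `diff ... || { mv ...; restart; }` one-liner above is an idempotent-update idiom: the candidate unit is written to `docker.service.new`, and the live file is replaced (and the daemon restarted) only when the two differ. On this first boot `diff` fails because the target does not exist yet, which is why the log shows the stat error followed by the symlink creation. A sketch of the idiom against temp files (paths illustrative; the `daemon-reload`/`restart` step is reduced to an echo):

	```shell
	d=$(mktemp -d)
	f="$d/docker.service"
	printf 'ExecStart=/usr/bin/dockerd\n' > "$f.new"

	# First pass: target absent, diff exits non-zero, new file moved into place
	diff -u "$f" "$f.new" 2>/dev/null || mv "$f.new" "$f"

	# Second pass: regenerate the candidate; identical content means no-op
	printf 'ExecStart=/usr/bin/dockerd\n' > "$f.new"
	if diff -u "$f" "$f.new" >/dev/null 2>&1; then
	  echo "unchanged, no restart needed"
	else
	  mv "$f.new" "$f"
	  echo "updated, would daemon-reload && restart"
	fi
	```

	The payoff is on re-provisioning: an unchanged unit file never triggers a needless Docker restart.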
	I0219 03:59:30.168099    8476 machine.go:91] provisioned docker machine in 15.9644152s
	I0219 03:59:30.168157    8476 client.go:171] LocalClient.Create took 1m0.735361s
	I0219 03:59:30.168157    8476 start.go:167] duration metric: libmachine.API.Create for "multinode-657900" took 1m0.735361s
	I0219 03:59:30.168157    8476 start.go:300] post-start starting for "multinode-657900" (driver="hyperv")
	I0219 03:59:30.168228    8476 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 03:59:30.178385    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 03:59:30.178385    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:30.878006    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:30.878116    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:30.878116    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:31.921164    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 03:59:31.921164    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:31.921795    8476 sshutil.go:53] new ssh client: &{IP:172.28.246.233 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\id_rsa Username:docker}
	I0219 03:59:32.030594    8476 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.8521535s)
	I0219 03:59:32.042425    8476 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 03:59:32.049241    8476 command_runner.go:130] > NAME=Buildroot
	I0219 03:59:32.049301    8476 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
	I0219 03:59:32.049301    8476 command_runner.go:130] > ID=buildroot
	I0219 03:59:32.049301    8476 command_runner.go:130] > VERSION_ID=2021.02.12
	I0219 03:59:32.049301    8476 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0219 03:59:32.050067    8476 info.go:137] Remote host: Buildroot 2021.02.12
	I0219 03:59:32.050140    8476 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 03:59:32.050544    8476 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 03:59:32.051394    8476 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 03:59:32.051394    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> /etc/ssl/certs/101482.pem
	I0219 03:59:32.062162    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 03:59:32.083384    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 03:59:32.121550    8476 start.go:303] post-start completed in 1.9534s
	I0219 03:59:32.124841    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:32.811164    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:32.811231    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:32.811269    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:33.843219    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 03:59:33.843219    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:33.843219    8476 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\config.json ...
	I0219 03:59:33.846747    8476 start.go:128] duration metric: createHost completed in 1m4.4169867s
	I0219 03:59:33.846747    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:34.556103    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:34.556103    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:34.556277    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:35.545956    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 03:59:35.545956    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:35.550920    8476 main.go:141] libmachine: Using SSH client type: native
	I0219 03:59:35.551875    8476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.233 22 <nil> <nil>}
	I0219 03:59:35.551875    8476 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0219 03:59:35.676196    8476 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676779175.670559425
	
	I0219 03:59:35.676196    8476 fix.go:207] guest clock: 1676779175.670559425
	I0219 03:59:35.676196    8476 fix.go:220] Guest: 2023-02-19 03:59:35.670559425 +0000 GMT Remote: 2023-02-19 03:59:33.846747 +0000 GMT m=+66.329762001 (delta=1.823812425s)
	I0219 03:59:35.676196    8476 fix.go:191] guest clock delta is within tolerance: 1.823812425s
	I0219 03:59:35.676196    8476 start.go:83] releasing machines lock for "multinode-657900", held for 1m6.2464418s
	I0219 03:59:35.676196    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:36.368807    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:36.368807    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:36.368917    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:37.361379    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 03:59:37.361769    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:37.365735    8476 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0219 03:59:37.365886    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:37.373363    8476 ssh_runner.go:195] Run: cat /version.json
	I0219 03:59:37.374277    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 03:59:38.120888    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:38.120888    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:38.120888    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 03:59:38.121003    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:38.121003    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:38.121003    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 03:59:39.199256    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 03:59:39.199282    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:39.199395    8476 sshutil.go:53] new ssh client: &{IP:172.28.246.233 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\id_rsa Username:docker}
	I0219 03:59:39.217763    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 03:59:39.217763    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 03:59:39.217763    8476 sshutil.go:53] new ssh client: &{IP:172.28.246.233 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\id_rsa Username:docker}
	I0219 03:59:39.290691    8476 command_runner.go:130] > {"iso_version": "v1.29.0-1676568791-15849", "kicbase_version": "v0.0.37-1675980448-15752", "minikube_version": "v1.29.0", "commit": "cf7ad99382c4b89a2ffa286b1101797332265ce3"}
	I0219 03:59:39.290911    8476 ssh_runner.go:235] Completed: cat /version.json: (1.9166403s)
	I0219 03:59:39.300681    8476 ssh_runner.go:195] Run: systemctl --version
	I0219 03:59:39.425762    8476 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0219 03:59:39.425836    8476 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.0600275s)
	I0219 03:59:39.425929    8476 command_runner.go:130] > systemd 247 (247)
	I0219 03:59:39.425929    8476 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0219 03:59:39.436651    8476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0219 03:59:39.444188    8476 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0219 03:59:39.444188    8476 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0219 03:59:39.452281    8476 ssh_runner.go:195] Run: which cri-dockerd
	I0219 03:59:39.458513    8476 command_runner.go:130] > /usr/bin/cri-dockerd
	I0219 03:59:39.468008    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0219 03:59:39.484167    8476 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0219 03:59:39.520132    8476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0219 03:59:39.539755    8476 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0219 03:59:39.540197    8476 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0219 03:59:39.540273    8476 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 03:59:39.547758    8476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 03:59:39.576319    8476 docker.go:630] Got preloaded images: 
	I0219 03:59:39.576319    8476 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
	I0219 03:59:39.586174    8476 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 03:59:39.601607    8476 command_runner.go:139] > {"Repositories":{}}
	I0219 03:59:39.611559    8476 ssh_runner.go:195] Run: which lz4
	I0219 03:59:39.616453    8476 command_runner.go:130] > /usr/bin/lz4
	I0219 03:59:39.616453    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0219 03:59:39.625745    8476 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0219 03:59:39.631384    8476 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0219 03:59:39.631384    8476 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0219 03:59:39.631565    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416334111 bytes)
	I0219 03:59:41.794339    8476 docker.go:594] Took 2.177511 seconds to copy over tarball
	I0219 03:59:41.803693    8476 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0219 03:59:52.137886    8476 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (10.3335653s)
	I0219 03:59:52.137975    8476 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0219 03:59:52.199924    8476 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 03:59:52.215552    8476 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.9.3":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.6-0":"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c":"sha256:fce326961ae2d51a5f726883fd59d
2a8c2ccc3e45d3bb859882db58e422e59e7"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.26.1":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","registry.k8s.io/kube-apiserver@sha256:99e1ed9fbc8a8d36a70f148f25130c02e0e366875249906be0bcb2c2d9df0c26":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.26.1":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","registry.k8s.io/kube-controller-manager@sha256:40adecbe3a40aa147c7d6e9a1f5fbd99b3f6d42d5222483ed3a47337d4f9a10b":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.26.1":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","registry.k8s.io/kube-proxy@sha256:85f705e7d98158a67432c53885b0d470c673b0fad3693440b45d07efebcda1c3":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed0
3c2c3b26b70fd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.26.1":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","registry.k8s.io/kube-scheduler@sha256:af0292c2c4fa6d09ee8544445eef373c1c280113cb6c968398a37da3744c41e4":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0219 03:59:52.215810    8476 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0219 03:59:52.261903    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 03:59:52.427685    8476 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 03:59:54.686030    8476 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.2583001s)
	I0219 03:59:54.686196    8476 start.go:485] detecting cgroup driver to use...
	I0219 03:59:54.686196    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 03:59:54.707254    8476 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0219 03:59:54.707320    8476 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0219 03:59:54.717645    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0219 03:59:54.743851    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0219 03:59:54.759371    8476 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0219 03:59:54.770347    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0219 03:59:54.797553    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 03:59:54.822088    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0219 03:59:54.844574    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 03:59:54.871886    8476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0219 03:59:54.896146    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0219 03:59:54.923223    8476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0219 03:59:54.935712    8476 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0219 03:59:54.944494    8476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0219 03:59:54.966289    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 03:59:55.111066    8476 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0219 03:59:55.135024    8476 start.go:485] detecting cgroup driver to use...
	I0219 03:59:55.145795    8476 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0219 03:59:55.171889    8476 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0219 03:59:55.171889    8476 command_runner.go:130] > [Unit]
	I0219 03:59:55.171974    8476 command_runner.go:130] > Description=Docker Application Container Engine
	I0219 03:59:55.171974    8476 command_runner.go:130] > Documentation=https://docs.docker.com
	I0219 03:59:55.171974    8476 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0219 03:59:55.171974    8476 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0219 03:59:55.171974    8476 command_runner.go:130] > StartLimitBurst=3
	I0219 03:59:55.171974    8476 command_runner.go:130] > StartLimitIntervalSec=60
	I0219 03:59:55.171974    8476 command_runner.go:130] > [Service]
	I0219 03:59:55.171974    8476 command_runner.go:130] > Type=notify
	I0219 03:59:55.171974    8476 command_runner.go:130] > Restart=on-failure
	I0219 03:59:55.171974    8476 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0219 03:59:55.172060    8476 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0219 03:59:55.172091    8476 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0219 03:59:55.172091    8476 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0219 03:59:55.172091    8476 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0219 03:59:55.172091    8476 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0219 03:59:55.172091    8476 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0219 03:59:55.172091    8476 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0219 03:59:55.172091    8476 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0219 03:59:55.172187    8476 command_runner.go:130] > ExecStart=
	I0219 03:59:55.172187    8476 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0219 03:59:55.172187    8476 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0219 03:59:55.172187    8476 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0219 03:59:55.172187    8476 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0219 03:59:55.172187    8476 command_runner.go:130] > LimitNOFILE=infinity
	I0219 03:59:55.172187    8476 command_runner.go:130] > LimitNPROC=infinity
	I0219 03:59:55.172258    8476 command_runner.go:130] > LimitCORE=infinity
	I0219 03:59:55.172279    8476 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0219 03:59:55.172279    8476 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0219 03:59:55.172279    8476 command_runner.go:130] > TasksMax=infinity
	I0219 03:59:55.172279    8476 command_runner.go:130] > TimeoutStartSec=0
	I0219 03:59:55.172279    8476 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0219 03:59:55.172279    8476 command_runner.go:130] > Delegate=yes
	I0219 03:59:55.172279    8476 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0219 03:59:55.172279    8476 command_runner.go:130] > KillMode=process
	I0219 03:59:55.172279    8476 command_runner.go:130] > [Install]
	I0219 03:59:55.172279    8476 command_runner.go:130] > WantedBy=multi-user.target
	I0219 03:59:55.181837    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 03:59:55.209032    8476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0219 03:59:55.239980    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 03:59:55.269985    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 03:59:55.297010    8476 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0219 03:59:55.369663    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 03:59:55.394782    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 03:59:55.424443    8476 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0219 03:59:55.425293    8476 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0219 03:59:55.435869    8476 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0219 03:59:55.613275    8476 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0219 03:59:55.788652    8476 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0219 03:59:55.788781    8476 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0219 03:59:55.838264    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 03:59:56.011958    8476 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 03:59:57.486832    8476 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4748787s)
	I0219 03:59:57.494820    8476 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 03:59:57.638989    8476 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0219 03:59:57.806511    8476 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 03:59:57.989544    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 03:59:58.146027    8476 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0219 03:59:58.171167    8476 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0219 03:59:58.181695    8476 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0219 03:59:58.189125    8476 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0219 03:59:58.189125    8476 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0219 03:59:58.189125    8476 command_runner.go:130] > Device: 16h/22d	Inode: 971         Links: 1
	I0219 03:59:58.189125    8476 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0219 03:59:58.189125    8476 command_runner.go:130] > Access: 2023-02-19 03:59:58.156199363 +0000
	I0219 03:59:58.189125    8476 command_runner.go:130] > Modify: 2023-02-19 03:59:58.156199363 +0000
	I0219 03:59:58.189125    8476 command_runner.go:130] > Change: 2023-02-19 03:59:58.161199135 +0000
	I0219 03:59:58.189125    8476 command_runner.go:130] >  Birth: -
	I0219 03:59:58.189125    8476 start.go:553] Will wait 60s for crictl version
	I0219 03:59:58.199562    8476 ssh_runner.go:195] Run: which crictl
	I0219 03:59:58.206155    8476 command_runner.go:130] > /usr/bin/crictl
	I0219 03:59:58.215373    8476 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0219 03:59:58.381239    8476 command_runner.go:130] > Version:  0.1.0
	I0219 03:59:58.381239    8476 command_runner.go:130] > RuntimeName:  docker
	I0219 03:59:58.381239    8476 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0219 03:59:58.381239    8476 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0219 03:59:58.381239    8476 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0219 03:59:58.390272    8476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 03:59:58.431389    8476 command_runner.go:130] > 20.10.23
	I0219 03:59:58.441577    8476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 03:59:58.477892    8476 command_runner.go:130] > 20.10.23
	I0219 03:59:58.482639    8476 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0219 03:59:58.482822    8476 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0219 03:59:58.489318    8476 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0219 03:59:58.489318    8476 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0219 03:59:58.489318    8476 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0219 03:59:58.489318    8476 ip.go:207] Found interface: {Index:11 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7f:a7:14 Flags:up|broadcast|multicast|running}
	I0219 03:59:58.492654    8476 ip.go:210] interface addr: fe80::8ff9:73c7:b894:c84f/64
	I0219 03:59:58.492691    8476 ip.go:210] interface addr: 172.28.240.1/20
	I0219 03:59:58.501528    8476 ssh_runner.go:195] Run: grep 172.28.240.1	host.minikube.internal$ /etc/hosts
	I0219 03:59:58.507530    8476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 03:59:58.526329    8476 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 03:59:58.534743    8476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 03:59:58.570800    8476 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0219 03:59:58.570883    8476 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0219 03:59:58.570883    8476 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0219 03:59:58.570883    8476 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0219 03:59:58.570883    8476 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0219 03:59:58.570883    8476 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0219 03:59:58.570883    8476 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0219 03:59:58.570986    8476 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0219 03:59:58.571058    8476 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 03:59:58.571058    8476 docker.go:560] Images already preloaded, skipping extraction
	I0219 03:59:58.578707    8476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 03:59:58.611695    8476 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0219 03:59:58.611695    8476 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0219 03:59:58.611695    8476 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0219 03:59:58.611695    8476 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0219 03:59:58.611695    8476 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0219 03:59:58.611695    8476 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0219 03:59:58.611695    8476 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0219 03:59:58.611695    8476 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0219 03:59:58.611695    8476 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 03:59:58.611695    8476 cache_images.go:84] Images are preloaded, skipping loading
	I0219 03:59:58.619331    8476 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0219 03:59:58.656958    8476 command_runner.go:130] > cgroupfs
	I0219 03:59:58.656958    8476 cni.go:84] Creating CNI manager for ""
	I0219 03:59:58.656958    8476 cni.go:136] 1 nodes found, recommending kindnet
	I0219 03:59:58.656958    8476 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0219 03:59:58.656958    8476 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.246.233 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-657900 NodeName:multinode-657900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.246.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.246.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0219 03:59:58.657563    8476 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.246.233
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-657900"
	  kubeletExtraArgs:
	    node-ip: 172.28.246.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.246.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0219 03:59:58.657885    8476 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-657900 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.246.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-657900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0219 03:59:58.667645    8476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0219 03:59:58.681500    8476 command_runner.go:130] > kubeadm
	I0219 03:59:58.681500    8476 command_runner.go:130] > kubectl
	I0219 03:59:58.681500    8476 command_runner.go:130] > kubelet
	I0219 03:59:58.681500    8476 binaries.go:44] Found k8s binaries, skipping transfer
	I0219 03:59:58.691291    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0219 03:59:58.705285    8476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0219 03:59:58.732810    8476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0219 03:59:58.760319    8476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0219 03:59:58.799337    8476 ssh_runner.go:195] Run: grep 172.28.246.233	control-plane.minikube.internal$ /etc/hosts
	I0219 03:59:58.804530    8476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.246.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 03:59:58.821138    8476 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900 for IP: 172.28.246.233
	I0219 03:59:58.821138    8476 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 03:59:58.821996    8476 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0219 03:59:58.822143    8476 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0219 03:59:58.822800    8476 certs.go:315] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\client.key
	I0219 03:59:58.822800    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\client.crt with IP's: []
	I0219 03:59:58.975890    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\client.crt ...
	I0219 03:59:58.975890    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\client.crt: {Name:mk17338f6045d6f057dd0ed00139282e59d96165 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 03:59:58.977867    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\client.key ...
	I0219 03:59:58.977867    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\client.key: {Name:mk2e5c562aea290f65721b726500cf787ce042c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 03:59:58.979180    8476 certs.go:315] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.key.6c1995d7
	I0219 03:59:58.979180    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.crt.6c1995d7 with IP's: [172.28.246.233 10.96.0.1 127.0.0.1 10.0.0.1]
	I0219 03:59:59.150790    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.crt.6c1995d7 ...
	I0219 03:59:59.150790    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.crt.6c1995d7: {Name:mk5663dc151e228ed5ba38c101d42ca6e5ac9e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 03:59:59.151577    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.key.6c1995d7 ...
	I0219 03:59:59.151577    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.key.6c1995d7: {Name:mk9eadc1c830520e289f6ff0024ae2b0aa681915 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 03:59:59.153144    8476 certs.go:333] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.crt.6c1995d7 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.crt
	I0219 03:59:59.160142    8476 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.key.6c1995d7 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.key
	I0219 03:59:59.161129    8476 certs.go:315] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\proxy-client.key
	I0219 03:59:59.162239    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\proxy-client.crt with IP's: []
	I0219 03:59:59.478214    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\proxy-client.crt ...
	I0219 03:59:59.478214    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\proxy-client.crt: {Name:mk6db1c697fa3cb5824a22b86ed78e16053672f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 03:59:59.479852    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\proxy-client.key ...
	I0219 03:59:59.479852    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\proxy-client.key: {Name:mk10237f0324b3b41d458eec3528eefde011818e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 03:59:59.480856    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0219 03:59:59.481781    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0219 03:59:59.481781    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0219 03:59:59.488634    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0219 03:59:59.488972    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0219 03:59:59.488972    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0219 03:59:59.488972    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0219 03:59:59.489521    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0219 03:59:59.489688    8476 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem (1338 bytes)
	W0219 03:59:59.490171    8476 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148_empty.pem, impossibly tiny 0 bytes
	I0219 03:59:59.490171    8476 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0219 03:59:59.490171    8476 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0219 03:59:59.490747    8476 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0219 03:59:59.490830    8476 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0219 03:59:59.491366    8476 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem (1708 bytes)
	I0219 03:59:59.491557    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem -> /usr/share/ca-certificates/10148.pem
	I0219 03:59:59.491557    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> /usr/share/ca-certificates/101482.pem
	I0219 03:59:59.491557    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0219 03:59:59.492714    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0219 03:59:59.536351    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0219 03:59:59.575667    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0219 03:59:59.613418    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0219 03:59:59.650945    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0219 03:59:59.689357    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0219 03:59:59.731972    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0219 03:59:59.770078    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0219 03:59:59.809455    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem --> /usr/share/ca-certificates/10148.pem (1338 bytes)
	I0219 03:59:59.846925    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /usr/share/ca-certificates/101482.pem (1708 bytes)
	I0219 03:59:59.883139    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0219 03:59:59.926925    8476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0219 03:59:59.961343    8476 ssh_runner.go:195] Run: openssl version
	I0219 03:59:59.968960    8476 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0219 03:59:59.979121    8476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101482.pem && ln -fs /usr/share/ca-certificates/101482.pem /etc/ssl/certs/101482.pem"
	I0219 04:00:00.006606    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101482.pem
	I0219 04:00:00.014782    8476 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:00:00.014893    8476 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:00:00.024266    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101482.pem
	I0219 04:00:00.033558    8476 command_runner.go:130] > 3ec20f2e
	I0219 04:00:00.042678    8476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101482.pem /etc/ssl/certs/3ec20f2e.0"
	I0219 04:00:00.070919    8476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0219 04:00:00.097478    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:00:00.102960    8476 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:00:00.102960    8476 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:00:00.111576    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:00:00.118811    8476 command_runner.go:130] > b5213941
	I0219 04:00:00.127636    8476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0219 04:00:00.154693    8476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10148.pem && ln -fs /usr/share/ca-certificates/10148.pem /etc/ssl/certs/10148.pem"
	I0219 04:00:00.178292    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10148.pem
	I0219 04:00:00.184374    8476 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:00:00.184374    8476 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:00:00.192873    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10148.pem
	I0219 04:00:00.200968    8476 command_runner.go:130] > 51391683
	I0219 04:00:00.209868    8476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10148.pem /etc/ssl/certs/51391683.0"
	I0219 04:00:00.224893    8476 kubeadm.go:401] StartCluster: {Name:multinode-657900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-657900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.246.233 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:00:00.232823    8476 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0219 04:00:00.271549    8476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0219 04:00:00.287420    8476 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0219 04:00:00.287420    8476 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0219 04:00:00.287526    8476 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0219 04:00:00.298430    8476 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0219 04:00:00.320425    8476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0219 04:00:00.334763    8476 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0219 04:00:00.334763    8476 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0219 04:00:00.334763    8476 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0219 04:00:00.334763    8476 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0219 04:00:00.334885    8476 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0219 04:00:00.334956    8476 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0219 04:00:00.567354    8476 kubeadm.go:322] W0219 04:00:00.558784    1494 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:00:00.567354    8476 command_runner.go:130] ! W0219 04:00:00.558784    1494 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:00:01.081282    8476 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0219 04:00:01.081282    8476 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0219 04:00:19.390435    8476 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0219 04:00:19.390537    8476 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0219 04:00:19.390617    8476 command_runner.go:130] > [preflight] Running pre-flight checks
	I0219 04:00:19.390617    8476 kubeadm.go:322] [preflight] Running pre-flight checks
	I0219 04:00:19.390901    8476 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0219 04:00:19.390941    8476 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0219 04:00:19.390983    8476 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0219 04:00:19.390983    8476 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0219 04:00:19.391394    8476 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0219 04:00:19.391480    8476 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0219 04:00:19.391795    8476 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0219 04:00:19.391795    8476 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0219 04:00:19.394704    8476 out.go:204]   - Generating certificates and keys ...
	I0219 04:00:19.394936    8476 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0219 04:00:19.395020    8476 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0219 04:00:19.395200    8476 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0219 04:00:19.395200    8476 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0219 04:00:19.395455    8476 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0219 04:00:19.395455    8476 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0219 04:00:19.395533    8476 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0219 04:00:19.395610    8476 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0219 04:00:19.395828    8476 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0219 04:00:19.395864    8476 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0219 04:00:19.396027    8476 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0219 04:00:19.396027    8476 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0219 04:00:19.396145    8476 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0219 04:00:19.396145    8476 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0219 04:00:19.396145    8476 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-657900] and IPs [172.28.246.233 127.0.0.1 ::1]
	I0219 04:00:19.396145    8476 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-657900] and IPs [172.28.246.233 127.0.0.1 ::1]
	I0219 04:00:19.396679    8476 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0219 04:00:19.396738    8476 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0219 04:00:19.396854    8476 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-657900] and IPs [172.28.246.233 127.0.0.1 ::1]
	I0219 04:00:19.396854    8476 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-657900] and IPs [172.28.246.233 127.0.0.1 ::1]
	I0219 04:00:19.396854    8476 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0219 04:00:19.396854    8476 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0219 04:00:19.397378    8476 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0219 04:00:19.397455    8476 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0219 04:00:19.397557    8476 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0219 04:00:19.397557    8476 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0219 04:00:19.397557    8476 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0219 04:00:19.397557    8476 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0219 04:00:19.397557    8476 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0219 04:00:19.397557    8476 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0219 04:00:19.397557    8476 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0219 04:00:19.397557    8476 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0219 04:00:19.397557    8476 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0219 04:00:19.397557    8476 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0219 04:00:19.398188    8476 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0219 04:00:19.398188    8476 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0219 04:00:19.398188    8476 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0219 04:00:19.398188    8476 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0219 04:00:19.398188    8476 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0219 04:00:19.398188    8476 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0219 04:00:19.398722    8476 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0219 04:00:19.398722    8476 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0219 04:00:19.398906    8476 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0219 04:00:19.401632    8476 out.go:204]   - Booting up control plane ...
	I0219 04:00:19.398943    8476 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0219 04:00:19.401715    8476 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0219 04:00:19.401715    8476 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0219 04:00:19.401715    8476 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0219 04:00:19.402230    8476 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0219 04:00:19.402284    8476 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0219 04:00:19.402425    8476 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0219 04:00:19.402691    8476 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0219 04:00:19.402691    8476 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0219 04:00:19.403015    8476 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0219 04:00:19.403015    8476 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0219 04:00:19.403281    8476 command_runner.go:130] > [apiclient] All control plane components are healthy after 13.003402 seconds
	I0219 04:00:19.403344    8476 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.003402 seconds
	I0219 04:00:19.403422    8476 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0219 04:00:19.403422    8476 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0219 04:00:19.403422    8476 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0219 04:00:19.403422    8476 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0219 04:00:19.403422    8476 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0219 04:00:19.403422    8476 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0219 04:00:19.404287    8476 command_runner.go:130] > [mark-control-plane] Marking the node multinode-657900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0219 04:00:19.404287    8476 kubeadm.go:322] [mark-control-plane] Marking the node multinode-657900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0219 04:00:19.404287    8476 command_runner.go:130] > [bootstrap-token] Using token: dxtbej.sfluifilcnabhsg2
	I0219 04:00:19.404287    8476 kubeadm.go:322] [bootstrap-token] Using token: dxtbej.sfluifilcnabhsg2
	I0219 04:00:19.408072    8476 out.go:204]   - Configuring RBAC rules ...
	I0219 04:00:19.408334    8476 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0219 04:00:19.408396    8476 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0219 04:00:19.408674    8476 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0219 04:00:19.408674    8476 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0219 04:00:19.408946    8476 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0219 04:00:19.409009    8476 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0219 04:00:19.409009    8476 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0219 04:00:19.409009    8476 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0219 04:00:19.409009    8476 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0219 04:00:19.409009    8476 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0219 04:00:19.409009    8476 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0219 04:00:19.409009    8476 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0219 04:00:19.409961    8476 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0219 04:00:19.409961    8476 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0219 04:00:19.410041    8476 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0219 04:00:19.410102    8476 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0219 04:00:19.410188    8476 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0219 04:00:19.410250    8476 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0219 04:00:19.410310    8476 kubeadm.go:322] 
	I0219 04:00:19.410428    8476 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0219 04:00:19.410527    8476 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0219 04:00:19.410527    8476 kubeadm.go:322] 
	I0219 04:00:19.410926    8476 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0219 04:00:19.410926    8476 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0219 04:00:19.410992    8476 kubeadm.go:322] 
	I0219 04:00:19.411075    8476 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0219 04:00:19.411141    8476 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0219 04:00:19.411274    8476 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0219 04:00:19.411340    8476 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0219 04:00:19.411519    8476 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0219 04:00:19.411519    8476 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0219 04:00:19.411595    8476 kubeadm.go:322] 
	I0219 04:00:19.411729    8476 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0219 04:00:19.411729    8476 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0219 04:00:19.411729    8476 kubeadm.go:322] 
	I0219 04:00:19.411951    8476 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0219 04:00:19.411951    8476 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0219 04:00:19.411951    8476 kubeadm.go:322] 
	I0219 04:00:19.412070    8476 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0219 04:00:19.412070    8476 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0219 04:00:19.412189    8476 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0219 04:00:19.412189    8476 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0219 04:00:19.412417    8476 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0219 04:00:19.412417    8476 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0219 04:00:19.412417    8476 kubeadm.go:322] 
	I0219 04:00:19.412703    8476 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0219 04:00:19.412703    8476 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0219 04:00:19.412863    8476 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0219 04:00:19.412863    8476 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0219 04:00:19.412863    8476 kubeadm.go:322] 
	I0219 04:00:19.412863    8476 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dxtbej.sfluifilcnabhsg2 \
	I0219 04:00:19.412863    8476 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token dxtbej.sfluifilcnabhsg2 \
	I0219 04:00:19.412863    8476 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 \
	I0219 04:00:19.412863    8476 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 \
	I0219 04:00:19.412863    8476 kubeadm.go:322] 	--control-plane 
	I0219 04:00:19.412863    8476 command_runner.go:130] > 	--control-plane 
	I0219 04:00:19.412863    8476 kubeadm.go:322] 
	I0219 04:00:19.413564    8476 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0219 04:00:19.413564    8476 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0219 04:00:19.413564    8476 kubeadm.go:322] 
	I0219 04:00:19.413564    8476 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dxtbej.sfluifilcnabhsg2 \
	I0219 04:00:19.413564    8476 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dxtbej.sfluifilcnabhsg2 \
	I0219 04:00:19.413564    8476 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 
	I0219 04:00:19.413564    8476 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 
	I0219 04:00:19.413564    8476 cni.go:84] Creating CNI manager for ""
	I0219 04:00:19.413564    8476 cni.go:136] 1 nodes found, recommending kindnet
	I0219 04:00:19.417571    8476 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0219 04:00:19.428683    8476 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0219 04:00:19.436979    8476 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0219 04:00:19.436979    8476 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0219 04:00:19.436979    8476 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0219 04:00:19.436979    8476 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0219 04:00:19.436979    8476 command_runner.go:130] > Access: 2023-02-19 03:59:12.267417100 +0000
	I0219 04:00:19.436979    8476 command_runner.go:130] > Modify: 2023-02-16 22:59:55.000000000 +0000
	I0219 04:00:19.436979    8476 command_runner.go:130] > Change: 2023-02-19 03:59:03.008000000 +0000
	I0219 04:00:19.436979    8476 command_runner.go:130] >  Birth: -
	I0219 04:00:19.437896    8476 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0219 04:00:19.437896    8476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0219 04:00:19.501865    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0219 04:00:20.931673    8476 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0219 04:00:20.953168    8476 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0219 04:00:20.973313    8476 command_runner.go:130] > serviceaccount/kindnet created
	I0219 04:00:21.001656    8476 command_runner.go:130] > daemonset.apps/kindnet created
	I0219 04:00:21.009645    8476 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5069189s)
	I0219 04:00:21.009752    8476 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0219 04:00:21.023462    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:21.025559    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=b522747fea7d12101d906a75c46b71d9d9f96e61 minikube.k8s.io/name=multinode-657900 minikube.k8s.io/updated_at=2023_02_19T04_00_21_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:21.034786    8476 command_runner.go:130] > -16
	I0219 04:00:21.034902    8476 ops.go:34] apiserver oom_adj: -16
	I0219 04:00:21.234203    8476 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0219 04:00:21.236215    8476 command_runner.go:130] > node/multinode-657900 labeled
	I0219 04:00:21.244209    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:21.366011    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:21.893426    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:22.002371    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:22.383106    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:22.492475    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:22.886861    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:23.005516    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:23.389017    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:23.497432    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:23.896426    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:24.001814    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:24.383879    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:24.519946    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:24.889915    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:25.000404    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:25.385799    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:25.506983    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:25.887564    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:25.995394    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:26.388831    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:26.516436    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:26.881455    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:27.005493    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:27.383301    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:27.513259    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:27.892045    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:28.024401    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:28.378574    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:28.521547    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:28.887399    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:29.020445    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:29.391514    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:29.523752    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:29.882619    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:30.029003    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:30.381989    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:30.507528    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:30.888603    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:31.033269    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:31.379139    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:31.494070    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:31.882613    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:32.031691    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:32.386802    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:32.576193    8476 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0219 04:00:32.889885    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:00:33.054232    8476 command_runner.go:130] > NAME      SECRETS   AGE
	I0219 04:00:33.054232    8476 command_runner.go:130] > default   0         1s
	I0219 04:00:33.054232    8476 kubeadm.go:1073] duration metric: took 12.0444381s to wait for elevateKubeSystemPrivileges.
	I0219 04:00:33.054232    8476 kubeadm.go:403] StartCluster complete in 32.8295174s
	I0219 04:00:33.054232    8476 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:00:33.054232    8476 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:00:33.056135    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:00:33.057594    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0219 04:00:33.057594    8476 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0219 04:00:33.057754    8476 addons.go:65] Setting storage-provisioner=true in profile "multinode-657900"
	I0219 04:00:33.057754    8476 addons.go:65] Setting default-storageclass=true in profile "multinode-657900"
	I0219 04:00:33.057754    8476 addons.go:227] Setting addon storage-provisioner=true in "multinode-657900"
	I0219 04:00:33.057754    8476 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-657900"
	I0219 04:00:33.057754    8476 host.go:66] Checking if "multinode-657900" exists ...
	I0219 04:00:33.057754    8476 config.go:182] Loaded profile config "multinode-657900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:00:33.058568    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:00:33.059082    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:00:33.066599    8476 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:00:33.066599    8476 kapi.go:59] client config for multinode-657900: &rest.Config{Host:"https://172.28.246.233:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:00:33.066599    8476 round_trippers.go:463] GET https://172.28.246.233:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0219 04:00:33.066599    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:33.066599    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:33.066599    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:33.066599    8476 cert_rotation.go:137] Starting client certificate rotation controller
	I0219 04:00:33.103302    8476 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0219 04:00:33.103729    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:33.103729    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:33.103729    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:33.103729    8476 round_trippers.go:580]     Content-Length: 291
	I0219 04:00:33.103729    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:33 GMT
	I0219 04:00:33.103729    8476 round_trippers.go:580]     Audit-Id: 331c923c-19b3-4167-b805-f1fc763f4dae
	I0219 04:00:33.103729    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:33.103729    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:33.103883    8476 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15caddfb-a629-49c9-8b4b-8cd8e13b08e2","resourceVersion":"234","creationTimestamp":"2023-02-19T04:00:19Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0219 04:00:33.104315    8476 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15caddfb-a629-49c9-8b4b-8cd8e13b08e2","resourceVersion":"234","creationTimestamp":"2023-02-19T04:00:19Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0219 04:00:33.104741    8476 round_trippers.go:463] PUT https://172.28.246.233:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0219 04:00:33.104741    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:33.104824    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:33.104824    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:33.104824    8476 round_trippers.go:473]     Content-Type: application/json
	I0219 04:00:33.117864    8476 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0219 04:00:33.117864    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:33.117864    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:33.117864    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:33.117864    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:33.117864    8476 round_trippers.go:580]     Content-Length: 291
	I0219 04:00:33.117864    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:33 GMT
	I0219 04:00:33.118498    8476 round_trippers.go:580]     Audit-Id: c76e7534-b067-4cc4-9855-ec9069f25636
	I0219 04:00:33.118498    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:33.118576    8476 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15caddfb-a629-49c9-8b4b-8cd8e13b08e2","resourceVersion":"324","creationTimestamp":"2023-02-19T04:00:19Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0219 04:00:33.277216    8476 command_runner.go:130] > apiVersion: v1
	I0219 04:00:33.277216    8476 command_runner.go:130] > data:
	I0219 04:00:33.277216    8476 command_runner.go:130] >   Corefile: |
	I0219 04:00:33.277216    8476 command_runner.go:130] >     .:53 {
	I0219 04:00:33.277216    8476 command_runner.go:130] >         errors
	I0219 04:00:33.277216    8476 command_runner.go:130] >         health {
	I0219 04:00:33.277216    8476 command_runner.go:130] >            lameduck 5s
	I0219 04:00:33.277216    8476 command_runner.go:130] >         }
	I0219 04:00:33.277216    8476 command_runner.go:130] >         ready
	I0219 04:00:33.277216    8476 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0219 04:00:33.277216    8476 command_runner.go:130] >            pods insecure
	I0219 04:00:33.277216    8476 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0219 04:00:33.277216    8476 command_runner.go:130] >            ttl 30
	I0219 04:00:33.277216    8476 command_runner.go:130] >         }
	I0219 04:00:33.277216    8476 command_runner.go:130] >         prometheus :9153
	I0219 04:00:33.277216    8476 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0219 04:00:33.277216    8476 command_runner.go:130] >            max_concurrent 1000
	I0219 04:00:33.277216    8476 command_runner.go:130] >         }
	I0219 04:00:33.277216    8476 command_runner.go:130] >         cache 30
	I0219 04:00:33.277216    8476 command_runner.go:130] >         loop
	I0219 04:00:33.277216    8476 command_runner.go:130] >         reload
	I0219 04:00:33.277216    8476 command_runner.go:130] >         loadbalance
	I0219 04:00:33.277216    8476 command_runner.go:130] >     }
	I0219 04:00:33.277216    8476 command_runner.go:130] > kind: ConfigMap
	I0219 04:00:33.277216    8476 command_runner.go:130] > metadata:
	I0219 04:00:33.277216    8476 command_runner.go:130] >   creationTimestamp: "2023-02-19T04:00:19Z"
	I0219 04:00:33.277216    8476 command_runner.go:130] >   name: coredns
	I0219 04:00:33.277216    8476 command_runner.go:130] >   namespace: kube-system
	I0219 04:00:33.277216    8476 command_runner.go:130] >   resourceVersion: "230"
	I0219 04:00:33.277216    8476 command_runner.go:130] >   uid: 25821aee-fb16-415b-ac4e-9df69cd5c6ad
	I0219 04:00:33.277216    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0219 04:00:33.626933    8476 round_trippers.go:463] GET https://172.28.246.233:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0219 04:00:33.626933    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:33.627017    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:33.627017    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:33.628898    8476 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0219 04:00:33.628898    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:33.628898    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:33 GMT
	I0219 04:00:33.629810    8476 round_trippers.go:580]     Audit-Id: c9cc51b7-d137-4555-8c82-3c3d03f8bdfc
	I0219 04:00:33.629810    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:33.629810    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:33.629810    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:33.629810    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:33.629875    8476 round_trippers.go:580]     Content-Length: 291
	I0219 04:00:33.629932    8476 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15caddfb-a629-49c9-8b4b-8cd8e13b08e2","resourceVersion":"365","creationTimestamp":"2023-02-19T04:00:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0219 04:00:33.630083    8476 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-657900" context rescaled to 1 replicas
	I0219 04:00:33.630083    8476 start.go:223] Will wait 6m0s for node &{Name: IP:172.28.246.233 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:00:33.633470    8476 out.go:177] * Verifying Kubernetes components...
	I0219 04:00:33.645179    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:00:33.881147    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:00:33.881488    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:00:33.882507    8476 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:00:33.883225    8476 kapi.go:59] client config for multinode-657900: &rest.Config{Host:"https://172.28.246.233:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:00:33.883827    8476 round_trippers.go:463] GET https://172.28.246.233:8443/apis/storage.k8s.io/v1/storageclasses
	I0219 04:00:33.883827    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:33.883827    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:33.883827    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:33.883827    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:00:33.883827    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:00:33.892471    8476 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0219 04:00:33.891497    8476 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:00:33.896636    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:33.896636    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:33.896636    8476 round_trippers.go:580]     Content-Length: 109
	I0219 04:00:33.896636    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:33 GMT
	I0219 04:00:33.896636    8476 round_trippers.go:580]     Audit-Id: 217dc09c-d686-4bb3-9753-9f7474efba75
	I0219 04:00:33.896636    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:33.896636    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:33.896636    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:33.896636    8476 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"365"},"items":[]}
	I0219 04:00:33.896636    8476 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0219 04:00:33.896900    8476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0219 04:00:33.896900    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:00:33.897166    8476 addons.go:227] Setting addon default-storageclass=true in "multinode-657900"
	I0219 04:00:33.897282    8476 host.go:66] Checking if "multinode-657900" exists ...
	I0219 04:00:33.897500    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:00:33.994916    8476 command_runner.go:130] > configmap/coredns replaced
	I0219 04:00:33.994916    8476 start.go:921] {"host.minikube.internal": 172.28.240.1} host record injected into CoreDNS's ConfigMap
	I0219 04:00:33.997117    8476 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:00:33.998569    8476 kapi.go:59] client config for multinode-657900: &rest.Config{Host:"https://172.28.246.233:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:00:33.999919    8476 node_ready.go:35] waiting up to 6m0s for node "multinode-657900" to be "Ready" ...
	I0219 04:00:33.999919    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:33.999919    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:33.999919    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:33.999919    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:34.009233    8476 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0219 04:00:34.009340    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:34.009340    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:34.009340    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:34.009340    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:34.009340    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:34 GMT
	I0219 04:00:34.009340    8476 round_trippers.go:580]     Audit-Id: 25cab7e2-339d-4ac8-8b65-24bdca2d8c5b
	I0219 04:00:34.009501    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:34.009755    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:34.516370    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:34.516424    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:34.516424    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:34.516424    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:34.523794    8476 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:00:34.523794    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:34.524791    8476 round_trippers.go:580]     Audit-Id: b8384893-de67-4986-baff-22605e790063
	I0219 04:00:34.524791    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:34.524791    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:34.524791    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:34.524791    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:34.524791    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:34 GMT
	I0219 04:00:34.524791    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:34.703908    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:00:34.704042    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:00:34.703908    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:00:34.704042    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:00:34.704042    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:00:34.704042    8476 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0219 04:00:34.704042    8476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0219 04:00:34.704042    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:00:35.022412    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:35.022412    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:35.022512    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:35.022512    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:35.025980    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:35.025980    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:35.026720    8476 round_trippers.go:580]     Audit-Id: 51c731d1-296f-4c11-9c79-de08bdfa74e4
	I0219 04:00:35.026720    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:35.026720    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:35.026720    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:35.026720    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:35.026720    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:35 GMT
	I0219 04:00:35.027140    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:35.450379    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:00:35.450435    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:00:35.450435    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:00:35.513973    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:35.513973    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:35.514059    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:35.514059    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:35.517550    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:35.517724    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:35.517890    8476 round_trippers.go:580]     Audit-Id: ff3a1ffe-eb96-44c2-b51f-f64c6445be81
	I0219 04:00:35.517890    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:35.517890    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:35.517890    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:35.517890    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:35.517890    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:35 GMT
	I0219 04:00:35.517890    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:35.766272    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 04:00:35.766536    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:00:35.766745    8476 sshutil.go:53] new ssh client: &{IP:172.28.246.233 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\id_rsa Username:docker}
	I0219 04:00:35.900147    8476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0219 04:00:36.019746    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:36.019746    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:36.019746    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:36.019746    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:36.022322    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:00:36.022322    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:36.022322    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:36 GMT
	I0219 04:00:36.022322    8476 round_trippers.go:580]     Audit-Id: 6ce2e538-5133-49e1-8946-d229bb3c5dc7
	I0219 04:00:36.022322    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:36.022322    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:36.022322    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:36.022322    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:36.023230    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:36.023944    8476 node_ready.go:58] node "multinode-657900" has status "Ready":"False"
	I0219 04:00:36.354084    8476 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0219 04:00:36.354168    8476 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0219 04:00:36.354168    8476 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0219 04:00:36.354168    8476 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0219 04:00:36.354168    8476 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0219 04:00:36.354168    8476 command_runner.go:130] > pod/storage-provisioner created
	I0219 04:00:36.511918    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 04:00:36.511918    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:00:36.512088    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:36.512204    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:36.512254    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:36.512254    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:36.512409    8476 sshutil.go:53] new ssh client: &{IP:172.28.246.233 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\id_rsa Username:docker}
	I0219 04:00:36.516003    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:36.516003    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:36.516003    8476 round_trippers.go:580]     Audit-Id: fa4bcbd2-9429-43b6-b551-fd72d68f734b
	I0219 04:00:36.516003    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:36.516003    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:36.516687    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:36.516687    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:36.516687    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:36 GMT
	I0219 04:00:36.516965    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:36.649258    8476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0219 04:00:36.929130    8476 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0219 04:00:36.937721    8476 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0219 04:00:36.944464    8476 addons.go:492] enable addons completed in 3.8868823s: enabled=[storage-provisioner default-storageclass]
	I0219 04:00:37.017054    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:37.017054    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:37.017054    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:37.017054    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:37.021500    8476 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:00:37.021500    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:37.021500    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:37.021500    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:37 GMT
	I0219 04:00:37.021500    8476 round_trippers.go:580]     Audit-Id: da441361-e749-49a1-9dd2-815b12621096
	I0219 04:00:37.021500    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:37.021500    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:37.021500    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:37.022130    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:37.518055    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:37.518055    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:37.518055    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:37.518055    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:37.521763    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:37.521763    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:37.521763    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:37.521763    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:37.521763    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:37 GMT
	I0219 04:00:37.521763    8476 round_trippers.go:580]     Audit-Id: 1645cf1c-d668-40ca-a163-35abc7f4b554
	I0219 04:00:37.521763    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:37.521763    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:37.522804    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:38.020569    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:38.020569    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:38.020569    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:38.020569    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:38.024822    8476 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:00:38.024822    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:38.024822    8476 round_trippers.go:580]     Audit-Id: 3603eb94-cd7c-4341-bec8-904743b6c451
	I0219 04:00:38.024822    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:38.024822    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:38.024822    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:38.024822    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:38.024822    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:38 GMT
	I0219 04:00:38.025216    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:38.025829    8476 node_ready.go:58] node "multinode-657900" has status "Ready":"False"
	I0219 04:00:38.511055    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:38.511055    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:38.511055    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:38.511055    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:38.514582    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:38.514582    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:38.514582    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:38 GMT
	I0219 04:00:38.514582    8476 round_trippers.go:580]     Audit-Id: c8140922-23d5-4bd8-bfa9-e99172dc735c
	I0219 04:00:38.514582    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:38.514582    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:38.514582    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:38.514582    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:38.514980    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:39.022736    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:39.022736    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:39.022736    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:39.022736    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:39.026476    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:39.026476    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:39.026476    8476 round_trippers.go:580]     Audit-Id: 3292c8f6-5f81-466a-b89c-4e8c4155650f
	I0219 04:00:39.026476    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:39.026476    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:39.026476    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:39.026476    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:39.026476    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:39 GMT
	I0219 04:00:39.027214    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:39.522336    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:39.522336    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:39.522336    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:39.522336    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:39.527133    8476 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:00:39.527133    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:39.527133    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:39.527133    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:39 GMT
	I0219 04:00:39.527133    8476 round_trippers.go:580]     Audit-Id: 70643dce-7462-4d94-9f73-9bf72f827490
	I0219 04:00:39.527133    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:39.527133    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:39.527133    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:39.538314    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:40.015557    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:40.015684    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:40.015684    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:40.015684    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:40.019043    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:40.019171    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:40.019171    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:40.019171    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:40 GMT
	I0219 04:00:40.019171    8476 round_trippers.go:580]     Audit-Id: 2d6b75b4-3fde-4709-b4ed-b18cb9c81e55
	I0219 04:00:40.019238    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:40.019238    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:40.019238    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:40.019524    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:40.527015    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:40.527090    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:40.527090    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:40.527090    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:40.530195    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:40.530195    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:40.530195    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:40 GMT
	I0219 04:00:40.530195    8476 round_trippers.go:580]     Audit-Id: ad7e99f6-1ac5-42bb-8129-b310ce05f948
	I0219 04:00:40.530195    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:40.530195    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:40.531124    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:40.531124    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:40.531506    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:40.532060    8476 node_ready.go:58] node "multinode-657900" has status "Ready":"False"
	I0219 04:00:41.018641    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:41.018693    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:41.018754    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:41.018754    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:41.022093    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:41.022093    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:41.022093    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:41.022093    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:41.022093    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:41 GMT
	I0219 04:00:41.022093    8476 round_trippers.go:580]     Audit-Id: f0ade353-c05a-4fec-b143-5ca18d3b6def
	I0219 04:00:41.022093    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:41.022585    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:41.023017    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:41.512789    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:41.513192    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:41.513270    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:41.513270    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:41.516597    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:41.516861    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:41.516861    8476 round_trippers.go:580]     Audit-Id: 918707ac-b5fb-4ed9-9564-9127168dfa25
	I0219 04:00:41.516861    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:41.516921    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:41.516921    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:41.516921    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:41.516921    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:41 GMT
	I0219 04:00:41.517637    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:42.016651    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:42.016778    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:42.016778    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:42.016837    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:42.204718    8476 round_trippers.go:574] Response Status: 200 OK in 186 milliseconds
	I0219 04:00:42.204718    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:42.204718    8476 round_trippers.go:580]     Audit-Id: 2299df17-b4b2-4eee-896a-3faaf10da98a
	I0219 04:00:42.204718    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:42.204718    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:42.204718    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:42.204718    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:42.204718    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:42 GMT
	I0219 04:00:42.205216    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:42.523819    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:42.523819    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:42.523904    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:42.523904    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:42.531967    8476 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0219 04:00:42.531967    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:42.531967    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:42.531967    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:42.531967    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:42 GMT
	I0219 04:00:42.531967    8476 round_trippers.go:580]     Audit-Id: 4e7625fc-eb35-4634-a076-3179fae17a05
	I0219 04:00:42.531967    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:42.531967    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:42.531967    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:42.532751    8476 node_ready.go:58] node "multinode-657900" has status "Ready":"False"
	I0219 04:00:43.010878    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:43.010878    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:43.010878    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:43.010878    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:43.018864    8476 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:00:43.018864    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:43.018864    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:43.018864    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:43.018864    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:43.018864    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:43.018864    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:43 GMT
	I0219 04:00:43.018864    8476 round_trippers.go:580]     Audit-Id: 39a102e2-c6db-4d3a-901e-9401445f8c41
	I0219 04:00:43.018864    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"323","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0219 04:00:43.516927    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:43.516927    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:43.516927    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:43.516927    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:43.519589    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:00:43.519589    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:43.520612    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:43.520612    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:43.520612    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:43 GMT
	I0219 04:00:43.520612    8476 round_trippers.go:580]     Audit-Id: fbf92a74-d969-4ad1-b174-4ca5b447913b
	I0219 04:00:43.520612    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:43.520612    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:43.520809    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"391","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0219 04:00:43.521132    8476 node_ready.go:49] node "multinode-657900" has status "Ready":"True"
	I0219 04:00:43.521132    8476 node_ready.go:38] duration metric: took 9.5212437s waiting for node "multinode-657900" to be "Ready" ...
	I0219 04:00:43.521132    8476 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:00:43.521132    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods
	I0219 04:00:43.521132    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:43.521132    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:43.521132    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:43.524728    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:43.525310    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:43.525310    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:43 GMT
	I0219 04:00:43.525310    8476 round_trippers.go:580]     Audit-Id: 6e4203ed-e703-4559-bab4-0af5b5e64d5f
	I0219 04:00:43.525310    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:43.525310    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:43.525310    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:43.525412    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:43.526166    8476 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"397"},"items":[{"metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"396","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53960 chars]
	I0219 04:00:43.530931    8476 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace to be "Ready" ...
	I0219 04:00:43.531057    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:00:43.531057    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:43.531130    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:43.531130    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:43.534164    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:43.534164    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:43.534164    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:43.534164    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:43.534164    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:43 GMT
	I0219 04:00:43.534164    8476 round_trippers.go:580]     Audit-Id: 08f43f2c-2c57-4ccb-b114-a153ccad539b
	I0219 04:00:43.534164    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:43.534164    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:43.534164    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"396","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0219 04:00:43.535182    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:43.535182    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:43.535182    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:43.535182    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:43.538188    8476 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0219 04:00:43.538188    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:43.538188    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:43 GMT
	I0219 04:00:43.538188    8476 round_trippers.go:580]     Audit-Id: cba1f891-c573-4b24-b6e4-a93718fdbf36
	I0219 04:00:43.538188    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:43.538188    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:43.538188    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:43.538188    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:43.538188    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"391","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0219 04:00:44.054599    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:00:44.054599    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:44.054689    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:44.054689    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:44.057209    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:00:44.057209    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:44.057209    8476 round_trippers.go:580]     Audit-Id: 80939c4f-494a-4536-9338-d0a7bc6452cb
	I0219 04:00:44.058018    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:44.058018    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:44.058018    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:44.058018    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:44.058129    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:44 GMT
	I0219 04:00:44.058340    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"396","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0219 04:00:44.059170    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:44.059170    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:44.059170    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:44.059170    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:44.066234    8476 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:00:44.066234    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:44.066234    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:44.066234    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:44.066234    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:44 GMT
	I0219 04:00:44.066234    8476 round_trippers.go:580]     Audit-Id: be315f0d-d85f-4b12-bdcc-4147bf018030
	I0219 04:00:44.066234    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:44.066234    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:44.066234    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"391","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0219 04:00:44.549585    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:00:44.549646    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:44.549646    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:44.549693    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:44.552777    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:44.552777    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:44.553005    8476 round_trippers.go:580]     Audit-Id: 742aea7a-04c2-40c8-940b-8030f765c0c9
	I0219 04:00:44.553005    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:44.553046    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:44.553046    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:44.553046    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:44.553046    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:44 GMT
	I0219 04:00:44.553379    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"396","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0219 04:00:44.553996    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:44.553996    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:44.553996    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:44.553996    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:44.556603    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:00:44.556603    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:44.556603    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:44.556603    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:44.556603    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:44 GMT
	I0219 04:00:44.556603    8476 round_trippers.go:580]     Audit-Id: 9ad8d012-0117-4651-9bdf-e7716bd921a2
	I0219 04:00:44.556603    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:44.556603    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:44.556603    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"391","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0219 04:00:45.039741    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:00:45.039741    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:45.039741    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:45.039864    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:45.043653    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:45.043653    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:45.043653    8476 round_trippers.go:580]     Audit-Id: ab35e0dc-acd4-4f00-b935-a0cbd499fcc0
	I0219 04:00:45.044322    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:45.044322    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:45.044322    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:45.044322    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:45.044322    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:45 GMT
	I0219 04:00:45.044567    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"396","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0219 04:00:45.045198    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:45.045280    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:45.045280    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:45.045280    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:45.047505    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:00:45.047505    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:45.047505    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:45.047952    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:45.047952    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:45.047952    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:45 GMT
	I0219 04:00:45.047952    8476 round_trippers.go:580]     Audit-Id: 4985d887-411e-4741-a344-563b8476e4d5
	I0219 04:00:45.048046    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:45.048328    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"391","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0219 04:00:45.540200    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:00:45.540294    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:45.540294    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:45.540294    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:45.543633    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:45.543633    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:45.544065    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:45.544065    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:45.544065    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:45.544130    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:45 GMT
	I0219 04:00:45.544130    8476 round_trippers.go:580]     Audit-Id: 0d513f73-24da-471e-9597-69b066851c95
	I0219 04:00:45.544130    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:45.544274    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"396","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0219 04:00:45.544993    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:45.544993    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:45.544993    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:45.545063    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:45.547214    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:00:45.547214    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:45.547626    8476 round_trippers.go:580]     Audit-Id: 2431cc5c-2ca9-4da3-8a5c-3e8ead074ea5
	I0219 04:00:45.547626    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:45.547626    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:45.547626    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:45.547626    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:45.547690    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:45 GMT
	I0219 04:00:45.548144    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"391","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0219 04:00:45.548466    8476 pod_ready.go:102] pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace has status "Ready":"False"
	I0219 04:00:46.044080    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:00:46.044080    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:46.044080    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:46.044080    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:46.047991    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:46.047991    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:46.047991    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:46.047991    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:46 GMT
	I0219 04:00:46.047991    8476 round_trippers.go:580]     Audit-Id: 3b6f8d96-ef50-42db-8894-0f3a395ee8d8
	I0219 04:00:46.047991    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:46.048899    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:46.048899    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:46.049127    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"412","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
	I0219 04:00:46.049904    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:46.049926    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:46.049926    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:46.049926    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:46.052963    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:46.052963    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:46.052963    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:46.053507    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:46 GMT
	I0219 04:00:46.053507    8476 round_trippers.go:580]     Audit-Id: effc9539-4818-4da4-bd75-14b4b0b3d9ec
	I0219 04:00:46.053507    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:46.053507    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:46.053507    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:46.053856    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"391","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0219 04:00:46.054269    8476 pod_ready.go:92] pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace has status "Ready":"True"
	I0219 04:00:46.054269    8476 pod_ready.go:81] duration metric: took 2.5232916s waiting for pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace to be "Ready" ...
	I0219 04:00:46.054269    8476 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:00:46.054414    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-657900
	I0219 04:00:46.054414    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:46.054414    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:46.054414    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:46.060669    8476 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0219 04:00:46.060669    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:46.060768    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:46 GMT
	I0219 04:00:46.060768    8476 round_trippers.go:580]     Audit-Id: 039e4361-86cc-4076-8005-628fc02269c0
	I0219 04:00:46.060768    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:46.060768    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:46.060768    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:46.060768    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:46.060768    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-657900","namespace":"kube-system","uid":"a9bb99b7-a011-4c3a-b705-922abff5b9d9","resourceVersion":"266","creationTimestamp":"2023-02-19T04:00:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.246.233:2379","kubernetes.io/config.hash":"b8463a9c9ed8ec609365197de83e82b6","kubernetes.io/config.mirror":"b8463a9c9ed8ec609365197de83e82b6","kubernetes.io/config.seen":"2023-02-19T04:00:19.445277145Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5856 chars]
	I0219 04:00:46.060768    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:46.060768    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:46.060768    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:46.060768    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:46.063756    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:00:46.063756    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:46.063756    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:46.063756    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:46.063756    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:46.063756    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:46.063756    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:46 GMT
	I0219 04:00:46.063756    8476 round_trippers.go:580]     Audit-Id: 7893918e-c53d-431d-823d-1df18fd32ebc
	I0219 04:00:46.063756    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"391","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0219 04:00:46.065620    8476 pod_ready.go:92] pod "etcd-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:00:46.065620    8476 pod_ready.go:81] duration metric: took 11.3512ms waiting for pod "etcd-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:00:46.065620    8476 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:00:46.065620    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-657900
	I0219 04:00:46.065620    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:46.065620    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:46.065620    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:46.067830    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:00:46.067830    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:46.067830    8476 round_trippers.go:580]     Audit-Id: d853b304-7977-44f6-9254-c32942aabdbc
	I0219 04:00:46.067830    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:46.068868    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:46.068868    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:46.068899    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:46.068923    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:46 GMT
	I0219 04:00:46.069220    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-657900","namespace":"kube-system","uid":"9e6fb2d2-5c86-496f-a76c-9c0c6f92080e","resourceVersion":"270","creationTimestamp":"2023-02-19T04:00:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.246.233:8443","kubernetes.io/config.hash":"1ff63a085e26860683ab640202bbdd7b","kubernetes.io/config.mirror":"1ff63a085e26860683ab640202bbdd7b","kubernetes.io/config.seen":"2023-02-19T04:00:05.502958001Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7392 chars]
	I0219 04:00:46.069745    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:46.069822    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:46.069822    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:46.069876    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:46.072163    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:00:46.072163    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:46.072163    8476 round_trippers.go:580]     Audit-Id: 1ecc1757-ee74-4bc7-a318-a8eeaf25793f
	I0219 04:00:46.072163    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:46.072163    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:46.072163    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:46.072163    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:46.072163    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:46 GMT
	I0219 04:00:46.073167    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"391","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0219 04:00:46.073660    8476 pod_ready.go:92] pod "kube-apiserver-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:00:46.073660    8476 pod_ready.go:81] duration metric: took 8.0403ms waiting for pod "kube-apiserver-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:00:46.073723    8476 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:00:46.073817    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-657900
	I0219 04:00:46.073817    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:46.073939    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:46.073939    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:46.076380    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:00:46.076964    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:46.077057    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:46.077057    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:46 GMT
	I0219 04:00:46.077057    8476 round_trippers.go:580]     Audit-Id: 764b8dc4-450e-42b9-a0e9-c66896336419
	I0219 04:00:46.077057    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:46.077057    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:46.077057    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:46.077962    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-657900","namespace":"kube-system","uid":"463b901e-dd04-46fc-91a3-9917b12590ff","resourceVersion":"294","creationTimestamp":"2023-02-19T04:00:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cd5ea91854c20d0b081e1be96fa370f","kubernetes.io/config.mirror":"7cd5ea91854c20d0b081e1be96fa370f","kubernetes.io/config.seen":"2023-02-19T04:00:19.445306645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6957 chars]
	I0219 04:00:46.078995    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:46.078995    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:46.078995    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:46.079098    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:46.085687    8476 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0219 04:00:46.085687    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:46.085687    8476 round_trippers.go:580]     Audit-Id: 98ac3bdb-c409-42d0-b9d0-da71f038b64b
	I0219 04:00:46.085687    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:46.085687    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:46.085687    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:46.085687    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:46.086479    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:46 GMT
	I0219 04:00:46.086479    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"391","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0219 04:00:46.086479    8476 pod_ready.go:92] pod "kube-controller-manager-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:00:46.086479    8476 pod_ready.go:81] duration metric: took 12.7558ms waiting for pod "kube-controller-manager-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:00:46.086479    8476 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kcm8m" in "kube-system" namespace to be "Ready" ...
	I0219 04:00:46.086479    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kcm8m
	I0219 04:00:46.086479    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:46.086479    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:46.086479    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:46.089843    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:00:46.089843    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:46.089843    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:46 GMT
	I0219 04:00:46.089843    8476 round_trippers.go:580]     Audit-Id: 233a375a-571b-45e4-8c67-fd483980e82a
	I0219 04:00:46.089843    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:46.089843    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:46.089843    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:46.089843    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:46.090848    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kcm8m","generateName":"kube-proxy-","namespace":"kube-system","uid":"8ce14b4f-6df3-4822-ac2b-06f3417e8eaa","resourceVersion":"383","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"86ae75b5-707b-4d98-a30e-e970d37cba85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86ae75b5-707b-4d98-a30e-e970d37cba85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0219 04:00:46.091380    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:46.091454    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:46.091454    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:46.091527    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:46.093679    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:00:46.093679    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:46.093679    8476 round_trippers.go:580]     Audit-Id: 0429841d-30d5-4e8d-a342-59780702010a
	I0219 04:00:46.093679    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:46.094107    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:46.094107    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:46.094107    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:46.094170    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:46 GMT
	I0219 04:00:46.094449    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"391","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0219 04:00:46.094900    8476 pod_ready.go:92] pod "kube-proxy-kcm8m" in "kube-system" namespace has status "Ready":"True"
	I0219 04:00:46.094900    8476 pod_ready.go:81] duration metric: took 8.4209ms waiting for pod "kube-proxy-kcm8m" in "kube-system" namespace to be "Ready" ...
	I0219 04:00:46.094968    8476 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:00:46.247737    8476 request.go:622] Waited for 152.5862ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-657900
	I0219 04:00:46.247901    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-657900
	I0219 04:00:46.247963    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:46.247963    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:46.247963    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:46.251383    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:46.252059    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:46.252059    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:46 GMT
	I0219 04:00:46.252059    8476 round_trippers.go:580]     Audit-Id: c5e19a34-759a-42e9-b09f-0335bb51b315
	I0219 04:00:46.252059    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:46.252059    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:46.252059    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:46.252059    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:46.252398    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-657900","namespace":"kube-system","uid":"ba38eff9-ab82-463a-bb6f-8af5e4599f15","resourceVersion":"267","creationTimestamp":"2023-02-19T04:00:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d67ab919dfafdb0eecec781e708349ff","kubernetes.io/config.mirror":"d67ab919dfafdb0eecec781e708349ff","kubernetes.io/config.seen":"2023-02-19T04:00:19.445308045Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4687 chars]
	I0219 04:00:46.449176    8476 request.go:622] Waited for 196.0506ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:46.449499    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:00:46.449499    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:46.449499    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:46.449499    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:46.452991    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:46.452991    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:46.452991    8476 round_trippers.go:580]     Audit-Id: 7bd52251-2329-41c9-80b7-79af380445ae
	I0219 04:00:46.452991    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:46.452991    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:46.452991    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:46.452991    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:46.453507    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:46 GMT
	I0219 04:00:46.453876    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"391","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0219 04:00:46.454396    8476 pod_ready.go:92] pod "kube-scheduler-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:00:46.454396    8476 pod_ready.go:81] duration metric: took 359.4299ms waiting for pod "kube-scheduler-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:00:46.454396    8476 pod_ready.go:38] duration metric: took 2.9332735s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:00:46.454467    8476 api_server.go:51] waiting for apiserver process to appear ...
	I0219 04:00:46.464358    8476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:00:46.486855    8476 command_runner.go:130] > 1951
	I0219 04:00:46.486900    8476 api_server.go:71] duration metric: took 12.8568583s to wait for apiserver process to appear ...
	I0219 04:00:46.486929    8476 api_server.go:87] waiting for apiserver healthz status ...
	I0219 04:00:46.486929    8476 api_server.go:252] Checking apiserver healthz at https://172.28.246.233:8443/healthz ...
	I0219 04:00:46.494596    8476 api_server.go:278] https://172.28.246.233:8443/healthz returned 200:
	ok
	I0219 04:00:46.495735    8476 round_trippers.go:463] GET https://172.28.246.233:8443/version
	I0219 04:00:46.495735    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:46.495735    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:46.495735    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:46.497074    8476 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0219 04:00:46.497074    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:46.497074    8476 round_trippers.go:580]     Audit-Id: c12b0596-1bc0-40b0-aade-5d2b75e8ad2f
	I0219 04:00:46.497074    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:46.497074    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:46.497074    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:46.497074    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:46.497074    8476 round_trippers.go:580]     Content-Length: 263
	I0219 04:00:46.497074    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:46 GMT
	I0219 04:00:46.497650    8476 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0219 04:00:46.497751    8476 api_server.go:140] control plane version: v1.26.1
	I0219 04:00:46.497751    8476 api_server.go:130] duration metric: took 10.8218ms to wait for apiserver health ...
	I0219 04:00:46.497751    8476 system_pods.go:43] waiting for kube-system pods to appear ...
	I0219 04:00:46.650895    8476 request.go:622] Waited for 152.6377ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods
	I0219 04:00:46.650998    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods
	I0219 04:00:46.650998    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:46.651128    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:46.651128    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:46.656496    8476 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0219 04:00:46.656545    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:46.656545    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:46.656642    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:46.656642    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:46 GMT
	I0219 04:00:46.656642    8476 round_trippers.go:580]     Audit-Id: 5d815874-cb35-4a49-8432-cf80327ca4d8
	I0219 04:00:46.656642    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:46.656642    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:46.658095    8476 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"412","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54053 chars]
	I0219 04:00:46.661310    8476 system_pods.go:59] 8 kube-system pods found
	I0219 04:00:46.661376    8476 system_pods.go:61] "coredns-787d4945fb-9mvfg" [38bce706-085e-44e0-bf5e-97cbdebb682e] Running
	I0219 04:00:46.661376    8476 system_pods.go:61] "etcd-multinode-657900" [a9bb99b7-a011-4c3a-b705-922abff5b9d9] Running
	I0219 04:00:46.661376    8476 system_pods.go:61] "kindnet-lvjng" [df7a9269-516f-4b66-af0f-429b21ee31cc] Running
	I0219 04:00:46.661376    8476 system_pods.go:61] "kube-apiserver-multinode-657900" [9e6fb2d2-5c86-496f-a76c-9c0c6f92080e] Running
	I0219 04:00:46.661376    8476 system_pods.go:61] "kube-controller-manager-multinode-657900" [463b901e-dd04-46fc-91a3-9917b12590ff] Running
	I0219 04:00:46.661430    8476 system_pods.go:61] "kube-proxy-kcm8m" [8ce14b4f-6df3-4822-ac2b-06f3417e8eaa] Running
	I0219 04:00:46.661430    8476 system_pods.go:61] "kube-scheduler-multinode-657900" [ba38eff9-ab82-463a-bb6f-8af5e4599f15] Running
	I0219 04:00:46.661430    8476 system_pods.go:61] "storage-provisioner" [4fcb063a-be6a-41e8-9379-c8f7cf16a165] Running
	I0219 04:00:46.661430    8476 system_pods.go:74] duration metric: took 163.6794ms to wait for pod list to return data ...
	I0219 04:00:46.661466    8476 default_sa.go:34] waiting for default service account to be created ...
	I0219 04:00:46.852157    8476 request.go:622] Waited for 190.6409ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.246.233:8443/api/v1/namespaces/default/serviceaccounts
	I0219 04:00:46.852157    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/default/serviceaccounts
	I0219 04:00:46.852157    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:46.852157    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:46.852157    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:46.855864    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:46.855864    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:46.855864    8476 round_trippers.go:580]     Audit-Id: b404701d-a2e9-4e40-b599-74510687562d
	I0219 04:00:46.855864    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:46.855864    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:46.856876    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:46.856876    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:46.856929    8476 round_trippers.go:580]     Content-Length: 261
	I0219 04:00:46.856929    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:46 GMT
	I0219 04:00:46.856929    8476 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"ddbec5b6-816c-4d34-aa55-cd3b12c88d54","resourceVersion":"320","creationTimestamp":"2023-02-19T04:00:32Z"}}]}
	I0219 04:00:46.857350    8476 default_sa.go:45] found service account: "default"
	I0219 04:00:46.857350    8476 default_sa.go:55] duration metric: took 195.885ms for default service account to be created ...
	I0219 04:00:46.857350    8476 system_pods.go:116] waiting for k8s-apps to be running ...
	I0219 04:00:47.055287    8476 request.go:622] Waited for 197.5357ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods
	I0219 04:00:47.055458    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods
	I0219 04:00:47.055563    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:47.055563    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:47.055563    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:47.061068    8476 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0219 04:00:47.061217    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:47.061217    8476 round_trippers.go:580]     Audit-Id: cc9e0fd1-173e-47e7-8a9c-f9cd415368b4
	I0219 04:00:47.061217    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:47.061217    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:47.061303    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:47.061303    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:47.061303    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:47 GMT
	I0219 04:00:47.062120    8476 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"412","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54053 chars]
	I0219 04:00:47.065455    8476 system_pods.go:86] 8 kube-system pods found
	I0219 04:00:47.065510    8476 system_pods.go:89] "coredns-787d4945fb-9mvfg" [38bce706-085e-44e0-bf5e-97cbdebb682e] Running
	I0219 04:00:47.065510    8476 system_pods.go:89] "etcd-multinode-657900" [a9bb99b7-a011-4c3a-b705-922abff5b9d9] Running
	I0219 04:00:47.065510    8476 system_pods.go:89] "kindnet-lvjng" [df7a9269-516f-4b66-af0f-429b21ee31cc] Running
	I0219 04:00:47.065594    8476 system_pods.go:89] "kube-apiserver-multinode-657900" [9e6fb2d2-5c86-496f-a76c-9c0c6f92080e] Running
	I0219 04:00:47.065594    8476 system_pods.go:89] "kube-controller-manager-multinode-657900" [463b901e-dd04-46fc-91a3-9917b12590ff] Running
	I0219 04:00:47.065594    8476 system_pods.go:89] "kube-proxy-kcm8m" [8ce14b4f-6df3-4822-ac2b-06f3417e8eaa] Running
	I0219 04:00:47.065594    8476 system_pods.go:89] "kube-scheduler-multinode-657900" [ba38eff9-ab82-463a-bb6f-8af5e4599f15] Running
	I0219 04:00:47.065594    8476 system_pods.go:89] "storage-provisioner" [4fcb063a-be6a-41e8-9379-c8f7cf16a165] Running
	I0219 04:00:47.065594    8476 system_pods.go:126] duration metric: took 208.156ms to wait for k8s-apps to be running ...
	I0219 04:00:47.065686    8476 system_svc.go:44] waiting for kubelet service to be running ....
	I0219 04:00:47.075834    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:00:47.098535    8476 system_svc.go:56] duration metric: took 32.9417ms WaitForService to wait for kubelet.
	I0219 04:00:47.098953    8476 kubeadm.go:578] duration metric: took 13.4689135s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0219 04:00:47.098953    8476 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:00:47.258390    8476 request.go:622] Waited for 159.1646ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.246.233:8443/api/v1/nodes
	I0219 04:00:47.258763    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes
	I0219 04:00:47.258763    8476 round_trippers.go:469] Request Headers:
	I0219 04:00:47.258840    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:00:47.258840    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:00:47.262238    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:00:47.262338    8476 round_trippers.go:577] Response Headers:
	I0219 04:00:47.262338    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:00:47.262338    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:00:47 GMT
	I0219 04:00:47.262338    8476 round_trippers.go:580]     Audit-Id: b114deea-b860-43e2-a1fc-fa1d74aba6f3
	I0219 04:00:47.262402    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:00:47.262402    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:00:47.262402    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:00:47.262542    8476 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"391","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5006 chars]
	I0219 04:00:47.263200    8476 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:00:47.263252    8476 node_conditions.go:123] node cpu capacity is 2
	I0219 04:00:47.263327    8476 node_conditions.go:105] duration metric: took 164.2509ms to run NodePressure ...
	I0219 04:00:47.263327    8476 start.go:228] waiting for startup goroutines ...
	I0219 04:00:47.263327    8476 start.go:233] waiting for cluster config update ...
	I0219 04:00:47.263380    8476 start.go:242] writing updated cluster config ...
	I0219 04:00:47.267943    8476 out.go:177] 
	I0219 04:00:47.274822    8476 config.go:182] Loaded profile config "multinode-657900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:00:47.274982    8476 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\config.json ...
	I0219 04:00:47.280660    8476 out.go:177] * Starting worker node multinode-657900-m02 in cluster multinode-657900
	I0219 04:00:47.284766    8476 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:00:47.284766    8476 cache.go:57] Caching tarball of preloaded images
	I0219 04:00:47.285391    8476 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0219 04:00:47.285608    8476 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0219 04:00:47.285679    8476 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\config.json ...
	I0219 04:00:47.287478    8476 cache.go:193] Successfully downloaded all kic artifacts
	I0219 04:00:47.287478    8476 start.go:364] acquiring machines lock for multinode-657900-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0219 04:00:47.288056    8476 start.go:368] acquired machines lock for "multinode-657900-m02" in 578.7µs
	I0219 04:00:47.288226    8476 start.go:93] Provisioning new machine with config: &{Name:multinode-657900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-657900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.246.233 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0219 04:00:47.288226    8476 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0219 04:00:47.293084    8476 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0219 04:00:47.294040    8476 start.go:159] libmachine.API.Create for "multinode-657900" (driver="hyperv")
	I0219 04:00:47.294218    8476 client.go:168] LocalClient.Create starting
	I0219 04:00:47.294348    8476 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0219 04:00:47.294348    8476 main.go:141] libmachine: Decoding PEM data...
	I0219 04:00:47.294348    8476 main.go:141] libmachine: Parsing certificate...
	I0219 04:00:47.294872    8476 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0219 04:00:47.295032    8476 main.go:141] libmachine: Decoding PEM data...
	I0219 04:00:47.295032    8476 main.go:141] libmachine: Parsing certificate...
	I0219 04:00:47.295188    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0219 04:00:47.700478    8476 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0219 04:00:47.700600    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:00:47.700725    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0219 04:00:48.352272    8476 main.go:141] libmachine: [stdout =====>] : False
	
	I0219 04:00:48.352345    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:00:48.352399    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 04:00:48.848071    8476 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 04:00:48.848306    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:00:48.848306    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 04:00:50.346538    8476 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 04:00:50.346538    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:00:50.348991    8476 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso...
	I0219 04:00:50.756955    8476 main.go:141] libmachine: Creating SSH key...
	I0219 04:00:51.003630    8476 main.go:141] libmachine: Creating VM...
	I0219 04:00:51.003630    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 04:00:52.396495    8476 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 04:00:52.396495    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:00:52.396692    8476 main.go:141] libmachine: Using switch "Default Switch"
	I0219 04:00:52.396736    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 04:00:53.044762    8476 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 04:00:53.044762    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:00:53.044762    8476 main.go:141] libmachine: Creating VHD
	I0219 04:00:53.044762    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0219 04:00:54.749902    8476 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 73991589-2572-4790-878D-0932C27F42D0
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0219 04:00:54.750071    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:00:54.750071    8476 main.go:141] libmachine: Writing magic tar header
	I0219 04:00:54.750155    8476 main.go:141] libmachine: Writing SSH key tar header
	I0219 04:00:54.757244    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0219 04:00:56.469054    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:00:56.469281    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:00:56.469281    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m02\disk.vhd' -SizeBytes 20000MB
	I0219 04:00:57.631922    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:00:57.631973    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:00:57.632051    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-657900-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0219 04:00:59.562098    8476 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-657900-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0219 04:00:59.562240    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:00:59.562240    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-657900-m02 -DynamicMemoryEnabled $false
	I0219 04:01:00.354672    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:01:00.354672    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:00.354672    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-657900-m02 -Count 2
	I0219 04:01:01.065755    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:01:01.065755    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:01.065834    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-657900-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m02\boot2docker.iso'
	I0219 04:01:02.139979    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:01:02.140231    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:02.140420    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-657900-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m02\disk.vhd'
	I0219 04:01:03.318429    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:01:03.318429    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:03.318429    8476 main.go:141] libmachine: Starting VM...
	I0219 04:01:03.318429    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-657900-m02
	I0219 04:01:04.976940    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:01:04.976940    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:04.976940    8476 main.go:141] libmachine: Waiting for host to start...
	I0219 04:01:04.977015    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:05.722042    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:05.722042    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:05.722042    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:06.755984    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:01:06.755984    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:07.759031    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:08.476749    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:08.476749    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:08.476749    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:09.460212    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:01:09.460388    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:10.474537    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:11.187856    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:11.187856    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:11.187856    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:12.190568    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:01:12.190568    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:13.202149    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:13.949467    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:13.949614    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:13.949674    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:14.981330    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:01:14.981403    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:15.983131    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:16.731034    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:16.731034    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:16.731128    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:17.733120    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:01:17.733189    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:18.744826    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:19.490633    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:19.490633    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:19.490633    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:20.490591    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:01:20.490635    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:21.492160    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:22.197782    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:22.197782    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:22.197967    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:23.211005    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:01:23.211005    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:24.215992    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:24.915625    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:24.915853    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:24.915905    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:25.952970    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:01:25.953006    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:26.953980    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:27.666285    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:27.666285    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:27.666285    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:28.683424    8476 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:01:28.683424    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:29.699087    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:30.458796    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:30.458796    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:30.458929    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:31.528853    8476 main.go:141] libmachine: [stdout =====>] : 172.28.248.228
	
	I0219 04:01:31.528931    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:31.529016    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:32.338384    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:32.338384    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:32.338466    8476 machine.go:88] provisioning docker machine ...
	I0219 04:01:32.338466    8476 buildroot.go:166] provisioning hostname "multinode-657900-m02"
	I0219 04:01:32.338732    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:33.115771    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:33.115835    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:33.115835    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:34.207952    8476 main.go:141] libmachine: [stdout =====>] : 172.28.248.228
	
	I0219 04:01:34.207952    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:34.210952    8476 main.go:141] libmachine: Using SSH client type: native
	I0219 04:01:34.220424    8476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.228 22 <nil> <nil>}
	I0219 04:01:34.220487    8476 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-657900-m02 && echo "multinode-657900-m02" | sudo tee /etc/hostname
	I0219 04:01:34.393175    8476 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-657900-m02
	
	I0219 04:01:34.393175    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:35.090366    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:35.090536    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:35.090536    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:36.123916    8476 main.go:141] libmachine: [stdout =====>] : 172.28.248.228
	
	I0219 04:01:36.123916    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:36.126886    8476 main.go:141] libmachine: Using SSH client type: native
	I0219 04:01:36.126886    8476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.228 22 <nil> <nil>}
	I0219 04:01:36.126886    8476 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-657900-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-657900-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-657900-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:01:36.304239    8476 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0219 04:01:36.304239    8476 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:01:36.304239    8476 buildroot.go:174] setting up certificates
	I0219 04:01:36.304239    8476 provision.go:83] configureAuth start
	I0219 04:01:36.304239    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:37.024144    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:37.024144    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:37.024217    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:38.085843    8476 main.go:141] libmachine: [stdout =====>] : 172.28.248.228
	
	I0219 04:01:38.086036    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:38.086126    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:38.813762    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:38.813762    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:38.813762    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:39.874440    8476 main.go:141] libmachine: [stdout =====>] : 172.28.248.228
	
	I0219 04:01:39.874440    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:39.874498    8476 provision.go:138] copyHostCerts
	I0219 04:01:39.874883    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0219 04:01:39.875279    8476 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:01:39.875320    8476 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:01:39.875827    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:01:39.876486    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0219 04:01:39.877140    8476 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:01:39.877140    8476 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:01:39.877201    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:01:39.878556    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0219 04:01:39.878556    8476 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:01:39.879083    8476 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:01:39.879231    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:01:39.880549    8476 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-657900-m02 san=[172.28.248.228 172.28.248.228 localhost 127.0.0.1 minikube multinode-657900-m02]
	I0219 04:01:40.055653    8476 provision.go:172] copyRemoteCerts
	I0219 04:01:40.067135    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:01:40.067135    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:40.796651    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:40.796870    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:40.796976    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:41.853061    8476 main.go:141] libmachine: [stdout =====>] : 172.28.248.228
	
	I0219 04:01:41.853061    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:41.853541    8476 sshutil.go:53] new ssh client: &{IP:172.28.248.228 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m02\id_rsa Username:docker}
	I0219 04:01:41.976547    8476 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.9094182s)
	I0219 04:01:41.976547    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0219 04:01:41.976547    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:01:42.017487    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0219 04:01:42.017952    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0219 04:01:42.062505    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0219 04:01:42.062505    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0219 04:01:42.105980    8476 provision.go:86] duration metric: configureAuth took 5.8017591s
	I0219 04:01:42.105980    8476 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:01:42.106692    8476 config.go:182] Loaded profile config "multinode-657900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:01:42.106692    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:42.863823    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:42.863878    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:42.863878    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:43.906181    8476 main.go:141] libmachine: [stdout =====>] : 172.28.248.228
	
	I0219 04:01:43.906181    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:43.911388    8476 main.go:141] libmachine: Using SSH client type: native
	I0219 04:01:43.912219    8476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.228 22 <nil> <nil>}
	I0219 04:01:43.912219    8476 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:01:44.074735    8476 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:01:44.074735    8476 buildroot.go:70] root file system type: tmpfs
	I0219 04:01:44.074735    8476 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:01:44.074735    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:44.802431    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:44.802683    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:44.802881    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:45.827617    8476 main.go:141] libmachine: [stdout =====>] : 172.28.248.228
	
	I0219 04:01:45.827837    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:45.833345    8476 main.go:141] libmachine: Using SSH client type: native
	I0219 04:01:45.834310    8476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.228 22 <nil> <nil>}
	I0219 04:01:45.834310    8476 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.246.233"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 04:01:46.014116    8476 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.246.233
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0219 04:01:46.014116    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:46.750475    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:46.750475    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:46.750563    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:47.826989    8476 main.go:141] libmachine: [stdout =====>] : 172.28.248.228
	
	I0219 04:01:47.826989    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:47.832572    8476 main.go:141] libmachine: Using SSH client type: native
	I0219 04:01:47.833306    8476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.228 22 <nil> <nil>}
	I0219 04:01:47.833306    8476 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 04:01:48.912997    8476 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0219 04:01:48.912997    8476 machine.go:91] provisioned docker machine in 16.574584s
	I0219 04:01:48.912997    8476 client.go:171] LocalClient.Create took 1m1.6189762s
	I0219 04:01:48.912997    8476 start.go:167] duration metric: libmachine.API.Create for "multinode-657900" took 1m1.6191536s
	I0219 04:01:48.912997    8476 start.go:300] post-start starting for "multinode-657900-m02" (driver="hyperv")
	I0219 04:01:48.912997    8476 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 04:01:48.924023    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 04:01:48.924023    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:49.646915    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:49.646915    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:49.647105    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:50.676868    8476 main.go:141] libmachine: [stdout =====>] : 172.28.248.228
	
	I0219 04:01:50.677025    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:50.677361    8476 sshutil.go:53] new ssh client: &{IP:172.28.248.228 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m02\id_rsa Username:docker}
	I0219 04:01:50.802267    8476 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.8782503s)
	I0219 04:01:50.812618    8476 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 04:01:50.819569    8476 command_runner.go:130] > NAME=Buildroot
	I0219 04:01:50.819569    8476 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
	I0219 04:01:50.819569    8476 command_runner.go:130] > ID=buildroot
	I0219 04:01:50.819669    8476 command_runner.go:130] > VERSION_ID=2021.02.12
	I0219 04:01:50.819669    8476 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0219 04:01:50.819669    8476 info.go:137] Remote host: Buildroot 2021.02.12
	I0219 04:01:50.819669    8476 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 04:01:50.819669    8476 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 04:01:50.820523    8476 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 04:01:50.820523    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> /etc/ssl/certs/101482.pem
	I0219 04:01:50.831798    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 04:01:50.848778    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 04:01:50.891363    8476 start.go:303] post-start completed in 1.978373s
	I0219 04:01:50.894187    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:51.620340    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:51.620340    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:51.620340    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:52.678103    8476 main.go:141] libmachine: [stdout =====>] : 172.28.248.228
	
	I0219 04:01:52.678291    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:52.678477    8476 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\config.json ...
	I0219 04:01:52.681291    8476 start.go:128] duration metric: createHost completed in 1m5.3932114s
	I0219 04:01:52.681352    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:53.416258    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:53.416331    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:53.416503    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:54.444975    8476 main.go:141] libmachine: [stdout =====>] : 172.28.248.228
	
	I0219 04:01:54.445038    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:54.448338    8476 main.go:141] libmachine: Using SSH client type: native
	I0219 04:01:54.449728    8476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.228 22 <nil> <nil>}
	I0219 04:01:54.449728    8476 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0219 04:01:54.608091    8476 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676779314.602059946
	
	I0219 04:01:54.608091    8476 fix.go:207] guest clock: 1676779314.602059946
	I0219 04:01:54.608091    8476 fix.go:220] Guest: 2023-02-19 04:01:54.602059946 +0000 GMT Remote: 2023-02-19 04:01:52.6812915 +0000 GMT m=+205.164752401 (delta=1.920768446s)
	I0219 04:01:54.608091    8476 fix.go:191] guest clock delta is within tolerance: 1.920768446s
	I0219 04:01:54.608091    8476 start.go:83] releasing machines lock for "multinode-657900-m02", held for 1m7.3202505s
	I0219 04:01:54.608091    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:55.344259    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:55.344398    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:55.344398    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:56.387363    8476 main.go:141] libmachine: [stdout =====>] : 172.28.248.228
	
	I0219 04:01:56.387363    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:56.391610    8476 out.go:177] * Found network options:
	I0219 04:01:56.394982    8476 out.go:177]   - NO_PROXY=172.28.246.233
	W0219 04:01:56.397726    8476 proxy.go:119] fail to check proxy env: Error ip not in block
	I0219 04:01:56.400504    8476 out.go:177]   - no_proxy=172.28.246.233
	W0219 04:01:56.402749    8476 proxy.go:119] fail to check proxy env: Error ip not in block
	W0219 04:01:56.404970    8476 proxy.go:119] fail to check proxy env: Error ip not in block
	I0219 04:01:56.406844    8476 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0219 04:01:56.406844    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:56.414078    8476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0219 04:01:56.414078    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:01:57.165214    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:57.165317    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:57.165514    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:57.172786    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:01:57.172786    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:57.172786    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:01:58.294621    8476 main.go:141] libmachine: [stdout =====>] : 172.28.248.228
	
	I0219 04:01:58.294803    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:58.295110    8476 sshutil.go:53] new ssh client: &{IP:172.28.248.228 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m02\id_rsa Username:docker}
	I0219 04:01:58.324585    8476 main.go:141] libmachine: [stdout =====>] : 172.28.248.228
	
	I0219 04:01:58.325247    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:01:58.325639    8476 sshutil.go:53] new ssh client: &{IP:172.28.248.228 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m02\id_rsa Username:docker}
	I0219 04:01:58.394938    8476 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0219 04:01:58.395390    8476 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (1.9812448s)
	W0219 04:01:58.395458    8476 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0219 04:01:58.405359    8476 ssh_runner.go:195] Run: which cri-dockerd
	I0219 04:01:58.495068    8476 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0219 04:01:58.495172    8476 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.0883344s)
	I0219 04:01:58.495172    8476 command_runner.go:130] > /usr/bin/cri-dockerd
	I0219 04:01:58.505255    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0219 04:01:58.522248    8476 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0219 04:01:58.561702    8476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0219 04:01:58.586590    8476 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0219 04:01:58.586709    8476 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0219 04:01:58.586709    8476 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:01:58.594554    8476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:01:58.626715    8476 docker.go:630] Got preloaded images: 
	I0219 04:01:58.626715    8476 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
	I0219 04:01:58.638354    8476 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 04:01:58.655396    8476 command_runner.go:139] > {"Repositories":{}}
	I0219 04:01:58.665330    8476 ssh_runner.go:195] Run: which lz4
	I0219 04:01:58.672404    8476 command_runner.go:130] > /usr/bin/lz4
	I0219 04:01:58.672404    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0219 04:01:58.683234    8476 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0219 04:01:58.688151    8476 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0219 04:01:58.689161    8476 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0219 04:01:58.689375    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416334111 bytes)
	I0219 04:02:01.706581    8476 docker.go:594] Took 3.034035 seconds to copy over tarball
	I0219 04:02:01.717165    8476 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0219 04:02:10.769696    8476 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.052335s)
	I0219 04:02:10.769696    8476 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0219 04:02:10.842723    8476 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 04:02:10.859548    8476 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.9.3":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.6-0":"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c":"sha256:fce326961ae2d51a5f726883fd59d
2a8c2ccc3e45d3bb859882db58e422e59e7"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.26.1":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","registry.k8s.io/kube-apiserver@sha256:99e1ed9fbc8a8d36a70f148f25130c02e0e366875249906be0bcb2c2d9df0c26":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.26.1":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","registry.k8s.io/kube-controller-manager@sha256:40adecbe3a40aa147c7d6e9a1f5fbd99b3f6d42d5222483ed3a47337d4f9a10b":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.26.1":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","registry.k8s.io/kube-proxy@sha256:85f705e7d98158a67432c53885b0d470c673b0fad3693440b45d07efebcda1c3":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed0
3c2c3b26b70fd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.26.1":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","registry.k8s.io/kube-scheduler@sha256:af0292c2c4fa6d09ee8544445eef373c1c280113cb6c968398a37da3744c41e4":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0219 04:02:10.859548    8476 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0219 04:02:10.897461    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:02:11.077640    8476 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:02:13.961324    8476 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.8836283s)
	I0219 04:02:13.961448    8476 start.go:485] detecting cgroup driver to use...
	I0219 04:02:13.961643    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:02:13.992341    8476 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0219 04:02:13.993190    8476 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0219 04:02:14.004701    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0219 04:02:14.036427    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0219 04:02:14.054384    8476 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0219 04:02:14.065026    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0219 04:02:14.098489    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:02:14.136906    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0219 04:02:14.165265    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:02:14.194207    8476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0219 04:02:14.225191    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0219 04:02:14.253775    8476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0219 04:02:14.270662    8476 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0219 04:02:14.282594    8476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0219 04:02:14.318280    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:02:14.492972    8476 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0219 04:02:14.522039    8476 start.go:485] detecting cgroup driver to use...
	I0219 04:02:14.532659    8476 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0219 04:02:14.550835    8476 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0219 04:02:14.550975    8476 command_runner.go:130] > [Unit]
	I0219 04:02:14.550975    8476 command_runner.go:130] > Description=Docker Application Container Engine
	I0219 04:02:14.550975    8476 command_runner.go:130] > Documentation=https://docs.docker.com
	I0219 04:02:14.550975    8476 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0219 04:02:14.550975    8476 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0219 04:02:14.550975    8476 command_runner.go:130] > StartLimitBurst=3
	I0219 04:02:14.550975    8476 command_runner.go:130] > StartLimitIntervalSec=60
	I0219 04:02:14.550975    8476 command_runner.go:130] > [Service]
	I0219 04:02:14.550975    8476 command_runner.go:130] > Type=notify
	I0219 04:02:14.551162    8476 command_runner.go:130] > Restart=on-failure
	I0219 04:02:14.551162    8476 command_runner.go:130] > Environment=NO_PROXY=172.28.246.233
	I0219 04:02:14.551162    8476 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0219 04:02:14.551162    8476 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0219 04:02:14.551162    8476 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0219 04:02:14.551162    8476 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0219 04:02:14.551162    8476 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0219 04:02:14.551162    8476 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0219 04:02:14.551162    8476 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0219 04:02:14.551162    8476 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0219 04:02:14.551293    8476 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0219 04:02:14.551293    8476 command_runner.go:130] > ExecStart=
	I0219 04:02:14.551293    8476 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0219 04:02:14.551293    8476 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0219 04:02:14.551293    8476 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0219 04:02:14.551293    8476 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0219 04:02:14.551293    8476 command_runner.go:130] > LimitNOFILE=infinity
	I0219 04:02:14.551293    8476 command_runner.go:130] > LimitNPROC=infinity
	I0219 04:02:14.551293    8476 command_runner.go:130] > LimitCORE=infinity
	I0219 04:02:14.551411    8476 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0219 04:02:14.551411    8476 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0219 04:02:14.551411    8476 command_runner.go:130] > TasksMax=infinity
	I0219 04:02:14.551411    8476 command_runner.go:130] > TimeoutStartSec=0
	I0219 04:02:14.551411    8476 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0219 04:02:14.551411    8476 command_runner.go:130] > Delegate=yes
	I0219 04:02:14.551411    8476 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0219 04:02:14.551411    8476 command_runner.go:130] > KillMode=process
	I0219 04:02:14.551411    8476 command_runner.go:130] > [Install]
	I0219 04:02:14.551538    8476 command_runner.go:130] > WantedBy=multi-user.target
	I0219 04:02:14.562209    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:02:14.591062    8476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0219 04:02:14.635279    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:02:14.670317    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:02:14.704531    8476 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0219 04:02:14.764602    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:02:14.793565    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:02:14.822492    8476 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0219 04:02:14.822492    8476 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0219 04:02:14.834586    8476 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0219 04:02:15.028173    8476 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0219 04:02:15.213721    8476 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0219 04:02:15.213721    8476 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0219 04:02:15.255699    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:02:15.426432    8476 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:02:16.930376    8476 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5038467s)
	I0219 04:02:16.938649    8476 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:02:17.098507    8476 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0219 04:02:17.283140    8476 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:02:17.454655    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:02:17.632735    8476 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0219 04:02:17.661772    8476 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0219 04:02:17.672830    8476 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0219 04:02:17.678823    8476 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0219 04:02:17.678823    8476 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0219 04:02:17.678823    8476 command_runner.go:130] > Device: 16h/22d	Inode: 887         Links: 1
	I0219 04:02:17.678823    8476 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0219 04:02:17.678823    8476 command_runner.go:130] > Access: 2023-02-19 04:02:17.646358070 +0000
	I0219 04:02:17.678823    8476 command_runner.go:130] > Modify: 2023-02-19 04:02:17.646358070 +0000
	I0219 04:02:17.678823    8476 command_runner.go:130] > Change: 2023-02-19 04:02:17.650357921 +0000
	I0219 04:02:17.678823    8476 command_runner.go:130] >  Birth: -
	I0219 04:02:17.678823    8476 start.go:553] Will wait 60s for crictl version
	I0219 04:02:17.686848    8476 ssh_runner.go:195] Run: which crictl
	I0219 04:02:17.691816    8476 command_runner.go:130] > /usr/bin/crictl
	I0219 04:02:17.700931    8476 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0219 04:02:17.848508    8476 command_runner.go:130] > Version:  0.1.0
	I0219 04:02:17.848508    8476 command_runner.go:130] > RuntimeName:  docker
	I0219 04:02:17.848508    8476 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0219 04:02:17.848508    8476 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0219 04:02:17.848508    8476 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0219 04:02:17.856240    8476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:02:17.899032    8476 command_runner.go:130] > 20.10.23
	I0219 04:02:17.906027    8476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:02:17.948048    8476 command_runner.go:130] > 20.10.23
	I0219 04:02:17.953036    8476 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0219 04:02:17.956078    8476 out.go:177]   - env NO_PROXY=172.28.246.233
	I0219 04:02:17.958075    8476 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0219 04:02:17.962056    8476 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0219 04:02:17.962056    8476 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0219 04:02:17.962056    8476 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0219 04:02:17.962056    8476 ip.go:207] Found interface: {Index:11 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7f:a7:14 Flags:up|broadcast|multicast|running}
	I0219 04:02:17.966040    8476 ip.go:210] interface addr: fe80::8ff9:73c7:b894:c84f/64
	I0219 04:02:17.966040    8476 ip.go:210] interface addr: 172.28.240.1/20
	I0219 04:02:17.975040    8476 ssh_runner.go:195] Run: grep 172.28.240.1	host.minikube.internal$ /etc/hosts
	I0219 04:02:17.983128    8476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
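The one-liner above updates `/etc/hosts` idempotently: it strips any stale tab-delimited `host.minikube.internal` entry, appends the current gateway IP, and copies the result back over the file. A sketch of the same pattern against a temporary file (the path and the stale `172.28.240.9` entry are stand-ins, not the VM's real hosts file):

```shell
# Build a scratch "hosts" file containing a stale minikube entry.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.28.240.9\thost.minikube.internal\n' > "$HOSTS"

# Same rewrite minikube runs: drop the old tab-delimited entry, append
# the fresh one, then replace the file (the real command uses sudo cp).
{ grep -v $'\thost.minikube.internal$' "$HOSTS"
  printf '172.28.240.1\thost.minikube.internal\n'; } > "$HOSTS.new"
cp "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```

Writing to a temp file and copying it into place keeps the update atomic from the reader's point of view.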
	I0219 04:02:18.005538    8476 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900 for IP: 172.28.248.228
	I0219 04:02:18.005538    8476 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:02:18.005538    8476 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0219 04:02:18.005538    8476 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0219 04:02:18.006552    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0219 04:02:18.006552    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0219 04:02:18.006552    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0219 04:02:18.006552    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0219 04:02:18.007541    8476 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem (1338 bytes)
	W0219 04:02:18.007541    8476 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148_empty.pem, impossibly tiny 0 bytes
	I0219 04:02:18.007541    8476 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0219 04:02:18.007541    8476 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0219 04:02:18.007541    8476 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0219 04:02:18.008825    8476 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0219 04:02:18.008825    8476 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem (1708 bytes)
	I0219 04:02:18.009389    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> /usr/share/ca-certificates/101482.pem
	I0219 04:02:18.009686    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:02:18.009777    8476 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem -> /usr/share/ca-certificates/10148.pem
	I0219 04:02:18.010835    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0219 04:02:18.054402    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0219 04:02:18.103262    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0219 04:02:18.145006    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0219 04:02:18.194820    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /usr/share/ca-certificates/101482.pem (1708 bytes)
	I0219 04:02:18.235043    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0219 04:02:18.278663    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem --> /usr/share/ca-certificates/10148.pem (1338 bytes)
	I0219 04:02:18.333028    8476 ssh_runner.go:195] Run: openssl version
	I0219 04:02:18.341149    8476 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0219 04:02:18.349854    8476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101482.pem && ln -fs /usr/share/ca-certificates/101482.pem /etc/ssl/certs/101482.pem"
	I0219 04:02:18.373868    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101482.pem
	I0219 04:02:18.380388    8476 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:02:18.380388    8476 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:02:18.389187    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101482.pem
	I0219 04:02:18.395114    8476 command_runner.go:130] > 3ec20f2e
	I0219 04:02:18.404150    8476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101482.pem /etc/ssl/certs/3ec20f2e.0"
	I0219 04:02:18.428240    8476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0219 04:02:18.452205    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:02:18.458230    8476 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:02:18.458230    8476 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:02:18.469148    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:02:18.480284    8476 command_runner.go:130] > b5213941
	I0219 04:02:18.491578    8476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0219 04:02:18.519020    8476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10148.pem && ln -fs /usr/share/ca-certificates/10148.pem /etc/ssl/certs/10148.pem"
	I0219 04:02:18.545203    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10148.pem
	I0219 04:02:18.551286    8476 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:02:18.551439    8476 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:02:18.560155    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10148.pem
	I0219 04:02:18.568018    8476 command_runner.go:130] > 51391683
	I0219 04:02:18.576936    8476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10148.pem /etc/ssl/certs/51391683.0"
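The sequence above installs each certificate into the shared trust directory and links it under its OpenSSL subject hash (`<hash>.0`), which is how OpenSSL locates CAs at verification time. A sketch of that naming scheme, reusing the `51391683` hash the log reports for `10148.pem` (the directory and the empty PEM file are stand-ins; the real hash comes from `openssl x509 -hash`):

```shell
CERT_DIR=$(mktemp -d)
touch "$CERT_DIR/10148.pem"   # stand-in for the real certificate
# In the real flow the hash is computed with:
#   openssl x509 -hash -noout -in 10148.pem
HASH=51391683
# Publish the cert under <subject-hash>.0 so OpenSSL can find it by hash.
ln -fs "$CERT_DIR/10148.pem" "$CERT_DIR/$HASH.0"
readlink "$CERT_DIR/$HASH.0"
```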
	I0219 04:02:18.603013    8476 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0219 04:02:18.645337    8476 command_runner.go:130] > cgroupfs
	I0219 04:02:18.645470    8476 cni.go:84] Creating CNI manager for ""
	I0219 04:02:18.645470    8476 cni.go:136] 2 nodes found, recommending kindnet
	I0219 04:02:18.645547    8476 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0219 04:02:18.645547    8476 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.248.228 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-657900 NodeName:multinode-657900-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.246.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.248.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0219 04:02:18.645800    8476 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.248.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-657900-m02"
	  kubeletExtraArgs:
	    node-ip: 172.28.248.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.246.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0219 04:02:18.645958    8476 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-657900-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.248.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-657900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0219 04:02:18.655546    8476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0219 04:02:18.672939    8476 command_runner.go:130] > kubeadm
	I0219 04:02:18.672939    8476 command_runner.go:130] > kubectl
	I0219 04:02:18.672939    8476 command_runner.go:130] > kubelet
	I0219 04:02:18.672939    8476 binaries.go:44] Found k8s binaries, skipping transfer
	I0219 04:02:18.682971    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0219 04:02:18.702271    8476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (454 bytes)
	I0219 04:02:18.727553    8476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0219 04:02:18.765562    8476 ssh_runner.go:195] Run: grep 172.28.246.233	control-plane.minikube.internal$ /etc/hosts
	I0219 04:02:18.770509    8476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.246.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:02:18.785967    8476 host.go:66] Checking if "multinode-657900" exists ...
	I0219 04:02:18.786980    8476 config.go:182] Loaded profile config "multinode-657900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:02:18.786980    8476 start.go:301] JoinCluster: &{Name:multinode-657900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-657900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.246.233 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.248.228 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:02:18.786980    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0219 04:02:18.786980    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:02:19.525536    8476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:02:19.525536    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:02:19.525669    8476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:02:20.587571    8476 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 04:02:20.587571    8476 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:02:20.587981    8476 sshutil.go:53] new ssh client: &{IP:172.28.246.233 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\id_rsa Username:docker}
	I0219 04:02:20.811685    8476 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 8csscc.n9o297fhmtnm9ovf --discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 
	I0219 04:02:20.811780    8476 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0": (2.0247119s)
	I0219 04:02:20.811780    8476 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.28.248.228 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0219 04:02:20.811939    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8csscc.n9o297fhmtnm9ovf --discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-657900-m02"
	I0219 04:02:21.040164    8476 command_runner.go:130] ! W0219 04:02:21.032547    1450 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:02:21.565598    8476 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0219 04:02:23.362032    8476 command_runner.go:130] > [preflight] Running pre-flight checks
	I0219 04:02:23.362032    8476 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0219 04:02:23.362032    8476 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0219 04:02:23.362032    8476 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0219 04:02:23.362032    8476 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0219 04:02:23.362032    8476 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0219 04:02:23.362032    8476 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0219 04:02:23.362032    8476 command_runner.go:130] > This node has joined the cluster:
	I0219 04:02:23.362032    8476 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0219 04:02:23.362032    8476 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0219 04:02:23.362032    8476 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0219 04:02:23.362032    8476 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8csscc.n9o297fhmtnm9ovf --discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-657900-m02": (2.5501009s)
	I0219 04:02:23.362032    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0219 04:02:23.544848    8476 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0219 04:02:23.714209    8476 start.go:303] JoinCluster complete in 4.9272459s
	I0219 04:02:23.714268    8476 cni.go:84] Creating CNI manager for ""
	I0219 04:02:23.714268    8476 cni.go:136] 2 nodes found, recommending kindnet
	I0219 04:02:23.723010    8476 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0219 04:02:23.731761    8476 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0219 04:02:23.731761    8476 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0219 04:02:23.731761    8476 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0219 04:02:23.731761    8476 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0219 04:02:23.731761    8476 command_runner.go:130] > Access: 2023-02-19 03:59:12.267417100 +0000
	I0219 04:02:23.731761    8476 command_runner.go:130] > Modify: 2023-02-16 22:59:55.000000000 +0000
	I0219 04:02:23.731761    8476 command_runner.go:130] > Change: 2023-02-19 03:59:03.008000000 +0000
	I0219 04:02:23.731761    8476 command_runner.go:130] >  Birth: -
	I0219 04:02:23.732306    8476 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0219 04:02:23.732306    8476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0219 04:02:23.779907    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0219 04:02:24.092432    8476 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0219 04:02:24.092504    8476 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0219 04:02:24.092504    8476 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0219 04:02:24.092504    8476 command_runner.go:130] > daemonset.apps/kindnet configured
	I0219 04:02:24.093543    8476 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:02:24.093996    8476 kapi.go:59] client config for multinode-657900: &rest.Config{Host:"https://172.28.246.233:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:02:24.095170    8476 round_trippers.go:463] GET https://172.28.246.233:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0219 04:02:24.095275    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:24.095275    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:24.095275    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:24.110620    8476 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0219 04:02:24.110620    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:24.110774    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:24.110774    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:24.110774    8476 round_trippers.go:580]     Content-Length: 291
	I0219 04:02:24.110774    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:24 GMT
	I0219 04:02:24.110872    8476 round_trippers.go:580]     Audit-Id: d51332ab-77aa-47a7-af11-45af80d89383
	I0219 04:02:24.110872    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:24.110872    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:24.110872    8476 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15caddfb-a629-49c9-8b4b-8cd8e13b08e2","resourceVersion":"416","creationTimestamp":"2023-02-19T04:00:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0219 04:02:24.111076    8476 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-657900" context rescaled to 1 replicas
	I0219 04:02:24.111176    8476 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.28.248.228 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0219 04:02:24.113590    8476 out.go:177] * Verifying Kubernetes components...
	I0219 04:02:24.126591    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:02:24.151805    8476 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:02:24.152644    8476 kapi.go:59] client config for multinode-657900: &rest.Config{Host:"https://172.28.246.233:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:02:24.153207    8476 node_ready.go:35] waiting up to 6m0s for node "multinode-657900-m02" to be "Ready" ...
	I0219 04:02:24.153207    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:24.153207    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:24.153207    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:24.153207    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:24.161490    8476 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:02:24.161490    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:24.161490    8476 round_trippers.go:580]     Audit-Id: cff39525-17f9-4ea3-a090-c3c1f653c860
	I0219 04:02:24.161490    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:24.161490    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:24.161490    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:24.161490    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:24.161490    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:24 GMT
	I0219 04:02:24.162043    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"522","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3989 chars]
	I0219 04:02:24.663972    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:24.664050    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:24.664050    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:24.664118    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:24.667522    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:02:24.667522    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:24.667522    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:24 GMT
	I0219 04:02:24.667522    8476 round_trippers.go:580]     Audit-Id: f07a22e7-2b45-4938-b769-4865ceb62ff0
	I0219 04:02:24.667522    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:24.667522    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:24.667522    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:24.667522    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:24.667785    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"522","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3989 chars]
	I0219 04:02:25.168692    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:25.168811    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:25.168811    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:25.168811    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:25.172423    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:25.172423    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:25.172423    8476 round_trippers.go:580]     Audit-Id: 9bd77056-aa24-4d27-a771-3d24b9db982f
	I0219 04:02:25.172423    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:25.172423    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:25.172423    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:25.172423    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:25.172423    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:25 GMT
	I0219 04:02:25.173191    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"522","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3989 chars]
	I0219 04:02:25.670807    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:25.670807    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:25.670807    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:25.670807    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:25.674425    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:25.674890    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:25.674890    8476 round_trippers.go:580]     Audit-Id: 235037bd-c70e-4d8f-b05d-b923532d0b31
	I0219 04:02:25.674890    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:25.674960    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:25.674960    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:25.674960    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:25.674960    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:25 GMT
	I0219 04:02:25.675094    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"522","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3989 chars]
	I0219 04:02:26.175748    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:26.175837    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:26.175837    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:26.175837    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:26.182769    8476 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0219 04:02:26.182769    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:26.182769    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:26.182769    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:26 GMT
	I0219 04:02:26.182769    8476 round_trippers.go:580]     Audit-Id: 88f84504-59d8-4e67-9d8d-7fd559b7a7e4
	I0219 04:02:26.182769    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:26.182769    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:26.182769    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:26.182769    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"522","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3989 chars]
	I0219 04:02:26.183457    8476 node_ready.go:58] node "multinode-657900-m02" has status "Ready":"False"
	I0219 04:02:26.663673    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:26.663673    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:26.663757    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:26.663757    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:26.667055    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:26.668105    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:26.668105    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:26 GMT
	I0219 04:02:26.668105    8476 round_trippers.go:580]     Audit-Id: 22ea66ea-f019-45fe-b034-cf703c1bffc9
	I0219 04:02:26.668105    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:26.668105    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:26.668105    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:26.668105    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:26.668222    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"522","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3989 chars]
	I0219 04:02:27.171149    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:27.171149    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:27.171149    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:27.171149    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:27.175704    8476 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:02:27.175704    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:27.175704    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:27.175704    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:27.175704    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:27.175704    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:27.176539    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:27 GMT
	I0219 04:02:27.176539    8476 round_trippers.go:580]     Audit-Id: b4949baf-f8db-4195-a5b0-3c9431e10748
	I0219 04:02:27.176701    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"522","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3989 chars]
	I0219 04:02:27.678570    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:27.678570    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:27.678570    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:27.678570    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:27.685149    8476 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0219 04:02:27.685149    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:27.685149    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:27 GMT
	I0219 04:02:27.685656    8476 round_trippers.go:580]     Audit-Id: e343f1e2-48a6-4849-9a01-60cede28c47d
	I0219 04:02:27.685656    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:27.685656    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:27.685739    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:27.685760    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:27.685787    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"534","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4098 chars]
	I0219 04:02:28.178513    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:28.178513    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:28.178513    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:28.178513    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:28.182241    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:28.182241    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:28.182704    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:28.182704    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:28.182704    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:28.182704    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:28 GMT
	I0219 04:02:28.182704    8476 round_trippers.go:580]     Audit-Id: 655cebe5-17b4-448f-9e36-fa26c0fd70c6
	I0219 04:02:28.182704    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:28.183001    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"534","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4098 chars]
	I0219 04:02:28.666130    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:28.666211    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:28.666211    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:28.666211    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:28.672451    8476 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0219 04:02:28.672451    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:28.672451    8476 round_trippers.go:580]     Audit-Id: fd996e91-464f-44aa-b9a0-c0cbebae7339
	I0219 04:02:28.672451    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:28.672451    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:28.672451    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:28.672451    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:28.672451    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:28 GMT
	I0219 04:02:28.672451    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"534","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4098 chars]
	I0219 04:02:28.673459    8476 node_ready.go:58] node "multinode-657900-m02" has status "Ready":"False"
	I0219 04:02:29.169485    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:29.169485    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:29.169485    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:29.169485    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:29.173046    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:29.173249    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:29.173249    8476 round_trippers.go:580]     Audit-Id: 9cd62bbb-09b8-4468-aa4e-4bff22e45616
	I0219 04:02:29.173318    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:29.173356    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:29.173356    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:29.173356    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:29.173356    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:29 GMT
	I0219 04:02:29.173356    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"534","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4098 chars]
	I0219 04:02:29.672853    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:29.672926    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:29.672926    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:29.672926    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:29.676753    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:29.676753    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:29.677105    8476 round_trippers.go:580]     Audit-Id: 41c76660-4646-41c6-8f69-da7894170973
	I0219 04:02:29.677105    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:29.677105    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:29.677105    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:29.677105    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:29.677105    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:29 GMT
	I0219 04:02:29.677435    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"534","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4098 chars]
	I0219 04:02:30.175996    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:30.176091    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:30.176091    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:30.176195    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:30.179012    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:02:30.179012    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:30.179012    8476 round_trippers.go:580]     Audit-Id: 5d719f01-0c7e-4b66-87d7-fce7a69ffe89
	I0219 04:02:30.179012    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:30.179012    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:30.180249    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:30.180249    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:30.180249    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:30 GMT
	I0219 04:02:30.180453    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"534","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4098 chars]
	I0219 04:02:30.677397    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:30.677747    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:30.677747    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:30.677747    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:30.684324    8476 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0219 04:02:30.684324    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:30.684324    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:30.684324    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:30.684324    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:30.684324    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:30.684324    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:30 GMT
	I0219 04:02:30.684324    8476 round_trippers.go:580]     Audit-Id: 5fef9744-39bf-41d9-8a80-3b206a679ed3
	I0219 04:02:30.684877    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"534","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4098 chars]
	I0219 04:02:30.685051    8476 node_ready.go:58] node "multinode-657900-m02" has status "Ready":"False"
	I0219 04:02:31.163421    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:31.163421    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:31.163523    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:31.163523    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:31.166815    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:31.167262    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:31.167262    8476 round_trippers.go:580]     Audit-Id: b95e2d54-66e9-4e44-957b-2299df4a7864
	I0219 04:02:31.167448    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:31.167448    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:31.167448    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:31.167448    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:31.167448    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:31 GMT
	I0219 04:02:31.167448    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"534","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4098 chars]
	I0219 04:02:31.667456    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:31.667456    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:31.667538    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:31.667538    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:31.671360    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:02:31.671382    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:31.671382    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:31.671382    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:31.671382    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:31.671467    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:31 GMT
	I0219 04:02:31.671467    8476 round_trippers.go:580]     Audit-Id: b8204630-dbe0-4f94-992d-8a3f6e8e50bf
	I0219 04:02:31.671467    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:31.671667    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"534","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4098 chars]
	I0219 04:02:32.170317    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:32.170467    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:32.170467    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:32.170467    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:32.173245    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:02:32.174046    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:32.174046    8476 round_trippers.go:580]     Audit-Id: 95874d97-c19e-4e3b-ae5d-f2523b2d82f5
	I0219 04:02:32.174046    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:32.174046    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:32.174046    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:32.174120    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:32.174120    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:32 GMT
	I0219 04:02:32.174348    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"534","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4098 chars]
	I0219 04:02:32.673027    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:32.673027    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:32.673027    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:32.673027    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:32.691957    8476 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0219 04:02:32.692961    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:32.692961    8476 round_trippers.go:580]     Audit-Id: 9397855d-860f-469d-b806-7c8d3701dccd
	I0219 04:02:32.692961    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:32.692961    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:32.692961    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:32.692961    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:32.692961    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:32 GMT
	I0219 04:02:32.692961    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"534","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4098 chars]
	I0219 04:02:32.694206    8476 node_ready.go:58] node "multinode-657900-m02" has status "Ready":"False"
	I0219 04:02:33.163200    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:33.163284    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:33.163284    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:33.163284    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:33.167132    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:33.167214    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:33.167214    8476 round_trippers.go:580]     Audit-Id: 34859991-4475-4362-9df0-3400224c9263
	I0219 04:02:33.167214    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:33.167214    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:33.167214    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:33.167214    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:33.167214    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:33 GMT
	I0219 04:02:33.167371    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"544","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4267 chars]
	I0219 04:02:33.673763    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:33.673875    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:33.673875    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:33.673875    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:33.677424    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:33.677424    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:33.677424    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:33.678051    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:33.678051    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:33 GMT
	I0219 04:02:33.678051    8476 round_trippers.go:580]     Audit-Id: bf8b9995-2dc5-4439-a844-fa9860bcd3e8
	I0219 04:02:33.678051    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:33.678127    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:33.678445    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"544","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4267 chars]
	I0219 04:02:34.177549    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:34.177549    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:34.177629    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:34.177629    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:34.181628    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:34.181628    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:34.181628    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:34.181628    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:34.181766    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:34.181766    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:34 GMT
	I0219 04:02:34.181766    8476 round_trippers.go:580]     Audit-Id: 6076b02b-a8b7-4aaf-a662-5c34ebc67f92
	I0219 04:02:34.181766    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:34.181766    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"544","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4267 chars]
	I0219 04:02:34.668709    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:34.668709    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:34.668709    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:34.668709    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:34.672667    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:34.672667    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:34.672667    8476 round_trippers.go:580]     Audit-Id: 8509a332-53fc-4592-b83a-1b502931ce26
	I0219 04:02:34.673363    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:34.673363    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:34.673363    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:34.673363    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:34.673363    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:34 GMT
	I0219 04:02:34.673602    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"544","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4267 chars]
	I0219 04:02:35.176971    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:35.177050    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:35.177050    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:35.177050    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:35.180370    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:35.180445    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:35.180445    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:35.180445    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:35.180445    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:35.180607    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:35.180687    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:35 GMT
	I0219 04:02:35.180687    8476 round_trippers.go:580]     Audit-Id: 0d8de824-6dac-4a7d-ae45-6b8d066ad510
	I0219 04:02:35.180687    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"544","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4267 chars]
	I0219 04:02:35.181617    8476 node_ready.go:58] node "multinode-657900-m02" has status "Ready":"False"
	I0219 04:02:35.673572    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:35.673620    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:35.673620    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:35.673700    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:35.800303    8476 round_trippers.go:574] Response Status: 200 OK in 126 milliseconds
	I0219 04:02:35.800303    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:35.800303    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:35.800394    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:35.800394    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:35.800394    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:35 GMT
	I0219 04:02:35.800394    8476 round_trippers.go:580]     Audit-Id: 250f29fc-4ab3-4220-b18f-abdc5d91fe78
	I0219 04:02:35.800394    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:35.800609    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"544","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4267 chars]
	I0219 04:02:36.164204    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:36.164204    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:36.164278    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:36.164278    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:36.167858    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:36.167858    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:36.167858    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:36.167858    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:36.167858    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:36 GMT
	I0219 04:02:36.167858    8476 round_trippers.go:580]     Audit-Id: 38f49ac1-b081-434b-a321-714e37740c16
	I0219 04:02:36.167858    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:36.167858    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:36.167858    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"544","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4267 chars]
	I0219 04:02:36.664368    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:36.664433    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:36.664433    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:36.664433    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:36.667002    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:02:36.667922    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:36.667922    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:36.667922    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:36.667922    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:36.667922    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:36.667922    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:36 GMT
	I0219 04:02:36.668017    8476 round_trippers.go:580]     Audit-Id: 9c0930d0-2be5-49e4-82c3-8ecb2819e1ca
	I0219 04:02:36.668080    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"544","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4267 chars]
	I0219 04:02:37.167402    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:37.167619    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:37.167619    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:37.167619    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:37.175007    8476 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:02:37.175007    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:37.175007    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:37 GMT
	I0219 04:02:37.175007    8476 round_trippers.go:580]     Audit-Id: beec1ac4-3e4f-45aa-9a33-0b956df6ea92
	I0219 04:02:37.175007    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:37.175007    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:37.175007    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:37.175007    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:37.175007    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"544","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4267 chars]
	I0219 04:02:37.670644    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:37.670761    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:37.670761    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:37.670761    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:37.674180    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:37.674180    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:37.674180    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:37.674180    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:37.674180    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:37.674180    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:37 GMT
	I0219 04:02:37.674180    8476 round_trippers.go:580]     Audit-Id: 3e4958bc-9fdb-42fa-b7af-108b0a357419
	I0219 04:02:37.674180    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:37.674180    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"544","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4267 chars]
	I0219 04:02:37.675431    8476 node_ready.go:58] node "multinode-657900-m02" has status "Ready":"False"
	I0219 04:02:38.171380    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:38.171474    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:38.171474    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:38.171474    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:38.175446    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:38.175446    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:38.175919    8476 round_trippers.go:580]     Audit-Id: 110a7d1f-69dd-4231-bd77-6b6625a50571
	I0219 04:02:38.175919    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:38.175919    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:38.175919    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:38.175919    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:38.175919    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:38 GMT
	I0219 04:02:38.176222    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"556","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4133 chars]
	I0219 04:02:38.176716    8476 node_ready.go:49] node "multinode-657900-m02" has status "Ready":"True"
	I0219 04:02:38.176716    8476 node_ready.go:38] duration metric: took 14.0235546s waiting for node "multinode-657900-m02" to be "Ready" ...
	I0219 04:02:38.176716    8476 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:02:38.176831    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods
	I0219 04:02:38.176831    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:38.176831    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:38.176831    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:38.186342    8476 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0219 04:02:38.186766    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:38.186766    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:38.186832    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:38 GMT
	I0219 04:02:38.186832    8476 round_trippers.go:580]     Audit-Id: 60ce43b4-eb8b-4f1c-8b9d-b467db799e24
	I0219 04:02:38.186832    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:38.186832    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:38.186832    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:38.187344    8476 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"556"},"items":[{"metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"412","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67438 chars]
	I0219 04:02:38.190905    8476 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace to be "Ready" ...
	I0219 04:02:38.190905    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:02:38.190905    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:38.190905    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:38.190905    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:38.196120    8476 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0219 04:02:38.196120    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:38.196120    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:38.196120    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:38.196253    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:38 GMT
	I0219 04:02:38.196253    8476 round_trippers.go:580]     Audit-Id: 66855fb9-2362-4b42-bb63-d4535c10d5dc
	I0219 04:02:38.196253    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:38.196253    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:38.196374    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"412","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
	I0219 04:02:38.196793    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:02:38.196793    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:38.196793    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:38.197009    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:38.200826    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:02:38.200826    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:38.200826    8476 round_trippers.go:580]     Audit-Id: 6a797d8f-eefe-40b0-b2e2-910e1a100ccd
	I0219 04:02:38.200893    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:38.200893    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:38.200893    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:38.200893    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:38.200893    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:38 GMT
	I0219 04:02:38.201194    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"422","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
	I0219 04:02:38.201595    8476 pod_ready.go:92] pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace has status "Ready":"True"
	I0219 04:02:38.201595    8476 pod_ready.go:81] duration metric: took 10.6893ms waiting for pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace to be "Ready" ...
	I0219 04:02:38.201595    8476 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:02:38.201713    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-657900
	I0219 04:02:38.201713    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:38.201713    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:38.201772    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:38.203016    8476 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0219 04:02:38.203016    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:38.203974    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:38.203974    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:38 GMT
	I0219 04:02:38.204007    8476 round_trippers.go:580]     Audit-Id: c681a44b-ad28-4e2b-9182-01e5f49a36ab
	I0219 04:02:38.204007    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:38.204007    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:38.204007    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:38.204222    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-657900","namespace":"kube-system","uid":"a9bb99b7-a011-4c3a-b705-922abff5b9d9","resourceVersion":"266","creationTimestamp":"2023-02-19T04:00:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.246.233:2379","kubernetes.io/config.hash":"b8463a9c9ed8ec609365197de83e82b6","kubernetes.io/config.mirror":"b8463a9c9ed8ec609365197de83e82b6","kubernetes.io/config.seen":"2023-02-19T04:00:19.445277145Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5856 chars]
	I0219 04:02:38.204679    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:02:38.204679    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:38.204679    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:38.204679    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:38.208084    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:38.208084    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:38.208084    8476 round_trippers.go:580]     Audit-Id: dc913205-ef16-4c3a-8ec4-9edc1400e9fd
	I0219 04:02:38.208084    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:38.208084    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:38.208084    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:38.208084    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:38.208084    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:38 GMT
	I0219 04:02:38.208084    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"422","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
	I0219 04:02:38.209030    8476 pod_ready.go:92] pod "etcd-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:02:38.209030    8476 pod_ready.go:81] duration metric: took 7.4355ms waiting for pod "etcd-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:02:38.209030    8476 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:02:38.209030    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-657900
	I0219 04:02:38.209030    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:38.209030    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:38.209030    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:38.213199    8476 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:02:38.213199    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:38.213199    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:38 GMT
	I0219 04:02:38.213887    8476 round_trippers.go:580]     Audit-Id: 4fc4d51e-2489-46c8-9460-4169f46d4112
	I0219 04:02:38.213887    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:38.213923    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:38.213923    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:38.213923    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:38.214100    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-657900","namespace":"kube-system","uid":"9e6fb2d2-5c86-496f-a76c-9c0c6f92080e","resourceVersion":"270","creationTimestamp":"2023-02-19T04:00:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.246.233:8443","kubernetes.io/config.hash":"1ff63a085e26860683ab640202bbdd7b","kubernetes.io/config.mirror":"1ff63a085e26860683ab640202bbdd7b","kubernetes.io/config.seen":"2023-02-19T04:00:05.502958001Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7392 chars]
	I0219 04:02:38.214711    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:02:38.214711    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:38.214711    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:38.214711    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:38.217469    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:02:38.217469    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:38.217469    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:38.218408    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:38.218489    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:38 GMT
	I0219 04:02:38.218513    8476 round_trippers.go:580]     Audit-Id: 6e00c74c-44cd-4b27-a002-d9556a4296ee
	I0219 04:02:38.218513    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:38.218513    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:38.218730    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"422","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
	I0219 04:02:38.218730    8476 pod_ready.go:92] pod "kube-apiserver-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:02:38.218730    8476 pod_ready.go:81] duration metric: took 9.6997ms waiting for pod "kube-apiserver-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:02:38.218730    8476 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:02:38.219257    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-657900
	I0219 04:02:38.219257    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:38.219257    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:38.219367    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:38.222983    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:02:38.222983    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:38.223055    8476 round_trippers.go:580]     Audit-Id: bef6ffe1-2e22-4eb0-811a-c4c244875362
	I0219 04:02:38.223055    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:38.223055    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:38.223055    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:38.223055    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:38.223055    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:38 GMT
	I0219 04:02:38.223055    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-657900","namespace":"kube-system","uid":"463b901e-dd04-46fc-91a3-9917b12590ff","resourceVersion":"294","creationTimestamp":"2023-02-19T04:00:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cd5ea91854c20d0b081e1be96fa370f","kubernetes.io/config.mirror":"7cd5ea91854c20d0b081e1be96fa370f","kubernetes.io/config.seen":"2023-02-19T04:00:19.445306645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6957 chars]
	I0219 04:02:38.223699    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:02:38.223699    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:38.223699    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:38.223699    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:38.226313    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:02:38.227069    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:38.227069    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:38.227069    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:38.227069    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:38.227069    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:38.227069    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:38 GMT
	I0219 04:02:38.227069    8476 round_trippers.go:580]     Audit-Id: 63dbf535-0bda-4749-93ca-ab14e3eb60fe
	I0219 04:02:38.227069    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"422","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
	I0219 04:02:38.227697    8476 pod_ready.go:92] pod "kube-controller-manager-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:02:38.227697    8476 pod_ready.go:81] duration metric: took 8.967ms waiting for pod "kube-controller-manager-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:02:38.227697    8476 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8h9z4" in "kube-system" namespace to be "Ready" ...
	I0219 04:02:38.382259    8476 request.go:622] Waited for 154.5629ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h9z4
	I0219 04:02:38.382259    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h9z4
	I0219 04:02:38.382259    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:38.382259    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:38.382259    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:38.386897    8476 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:02:38.386980    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:38.386980    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:38 GMT
	I0219 04:02:38.386980    8476 round_trippers.go:580]     Audit-Id: b2ab6298-888b-4f58-a220-7bc035a53d29
	I0219 04:02:38.386980    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:38.387102    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:38.387102    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:38.387102    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:38.387249    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h9z4","generateName":"kube-proxy-","namespace":"kube-system","uid":"5ff10d29-0b2a-4046-a946-90b1a4d8bcb7","resourceVersion":"541","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"86ae75b5-707b-4d98-a30e-e970d37cba85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86ae75b5-707b-4d98-a30e-e970d37cba85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I0219 04:02:38.586312    8476 request.go:622] Waited for 198.303ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:38.586408    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:02:38.586408    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:38.586408    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:38.586408    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:38.590871    8476 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:02:38.590871    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:38.591313    8476 round_trippers.go:580]     Audit-Id: 36e43576-6c1e-4e4c-af55-492ebfa2f714
	I0219 04:02:38.591313    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:38.591313    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:38.591313    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:38.591313    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:38.591313    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:38 GMT
	I0219 04:02:38.591660    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"556","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4133 chars]
	I0219 04:02:38.591964    8476 pod_ready.go:92] pod "kube-proxy-8h9z4" in "kube-system" namespace has status "Ready":"True"
	I0219 04:02:38.591964    8476 pod_ready.go:81] duration metric: took 364.2681ms waiting for pod "kube-proxy-8h9z4" in "kube-system" namespace to be "Ready" ...
	I0219 04:02:38.591964    8476 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kcm8m" in "kube-system" namespace to be "Ready" ...
	I0219 04:02:38.773145    8476 request.go:622] Waited for 180.9253ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kcm8m
	I0219 04:02:38.773452    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kcm8m
	I0219 04:02:38.773452    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:38.773452    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:38.773452    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:38.777157    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:38.777157    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:38.777157    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:38.777157    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:38.777157    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:38 GMT
	I0219 04:02:38.777157    8476 round_trippers.go:580]     Audit-Id: 947239f0-606f-41fd-80b8-6a53cfcf2c45
	I0219 04:02:38.777157    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:38.777157    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:38.777157    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kcm8m","generateName":"kube-proxy-","namespace":"kube-system","uid":"8ce14b4f-6df3-4822-ac2b-06f3417e8eaa","resourceVersion":"383","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"86ae75b5-707b-4d98-a30e-e970d37cba85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86ae75b5-707b-4d98-a30e-e970d37cba85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0219 04:02:38.976602    8476 request.go:622] Waited for 198.446ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:02:38.976702    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:02:38.976702    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:38.976702    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:38.976971    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:38.980268    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:38.981029    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:38.981029    8476 round_trippers.go:580]     Audit-Id: 264b1c85-8fd3-445a-a188-638eafd306cc
	I0219 04:02:38.981029    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:38.981029    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:38.981029    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:38.981029    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:38.981029    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:38 GMT
	I0219 04:02:38.981202    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"422","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
	I0219 04:02:38.981867    8476 pod_ready.go:92] pod "kube-proxy-kcm8m" in "kube-system" namespace has status "Ready":"True"
	I0219 04:02:38.981867    8476 pod_ready.go:81] duration metric: took 389.9052ms waiting for pod "kube-proxy-kcm8m" in "kube-system" namespace to be "Ready" ...
	I0219 04:02:38.981867    8476 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:02:39.181564    8476 request.go:622] Waited for 199.2896ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-657900
	I0219 04:02:39.181564    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-657900
	I0219 04:02:39.181564    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:39.181564    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:39.181564    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:39.184177    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:02:39.185211    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:39.185253    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:39.185253    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:39.185253    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:39 GMT
	I0219 04:02:39.185304    8476 round_trippers.go:580]     Audit-Id: b97d3e22-f3b2-4d9b-b643-6af2306e7bcd
	I0219 04:02:39.185304    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:39.185304    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:39.185610    8476 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-657900","namespace":"kube-system","uid":"ba38eff9-ab82-463a-bb6f-8af5e4599f15","resourceVersion":"267","creationTimestamp":"2023-02-19T04:00:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d67ab919dfafdb0eecec781e708349ff","kubernetes.io/config.mirror":"d67ab919dfafdb0eecec781e708349ff","kubernetes.io/config.seen":"2023-02-19T04:00:19.445308045Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4687 chars]
	I0219 04:02:39.384832    8476 request.go:622] Waited for 198.6298ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:02:39.384832    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes/multinode-657900
	I0219 04:02:39.384832    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:39.384832    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:39.384832    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:39.387551    8476 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:02:39.388538    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:39.388538    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:39.388538    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:39.388538    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:39 GMT
	I0219 04:02:39.388642    8476 round_trippers.go:580]     Audit-Id: c3ce0fe2-0d2f-4c4b-bce1-068a1eedc9f8
	I0219 04:02:39.388642    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:39.388642    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:39.388797    8476 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"422","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
	I0219 04:02:39.389326    8476 pod_ready.go:92] pod "kube-scheduler-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:02:39.389326    8476 pod_ready.go:81] duration metric: took 407.4596ms waiting for pod "kube-scheduler-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:02:39.389326    8476 pod_ready.go:38] duration metric: took 1.2126142s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:02:39.389412    8476 system_svc.go:44] waiting for kubelet service to be running ....
	I0219 04:02:39.398396    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:02:39.422208    8476 system_svc.go:56] duration metric: took 32.8828ms WaitForService to wait for kubelet.
	I0219 04:02:39.422208    8476 kubeadm.go:578] duration metric: took 15.311083s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0219 04:02:39.422208    8476 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:02:39.571499    8476 request.go:622] Waited for 149.2912ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.246.233:8443/api/v1/nodes
	I0219 04:02:39.571499    8476 round_trippers.go:463] GET https://172.28.246.233:8443/api/v1/nodes
	I0219 04:02:39.571499    8476 round_trippers.go:469] Request Headers:
	I0219 04:02:39.571499    8476 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:02:39.571499    8476 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:02:39.575185    8476 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:02:39.575185    8476 round_trippers.go:577] Response Headers:
	I0219 04:02:39.575185    8476 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:02:39.576195    8476 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:02:39.576247    8476 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:02:39 GMT
	I0219 04:02:39.576354    8476 round_trippers.go:580]     Audit-Id: 6773ee22-cf34-4fc8-b3ed-17a1097fa8da
	I0219 04:02:39.576354    8476 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:02:39.576397    8476 round_trippers.go:580]     Content-Type: application/json
	I0219 04:02:39.576495    8476 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"557"},"items":[{"metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"422","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10293 chars]
	I0219 04:02:39.577678    8476 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:02:39.577737    8476 node_conditions.go:123] node cpu capacity is 2
	I0219 04:02:39.577737    8476 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:02:39.577737    8476 node_conditions.go:123] node cpu capacity is 2
	I0219 04:02:39.577856    8476 node_conditions.go:105] duration metric: took 155.5287ms to run NodePressure ...
	I0219 04:02:39.577856    8476 start.go:228] waiting for startup goroutines ...
	I0219 04:02:39.577975    8476 start.go:242] writing updated cluster config ...
	I0219 04:02:39.589606    8476 ssh_runner.go:195] Run: rm -f paused
	I0219 04:02:39.776434    8476 start.go:555] kubectl: 1.18.2, cluster: 1.26.1 (minor skew: 8)
	I0219 04:02:39.781630    8476 out.go:177] 
	W0219 04:02:39.784410    8476 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.1.
	I0219 04:02:39.786294    8476 out.go:177]   - Want kubectl v1.26.1? Try 'minikube kubectl -- get pods -A'
	I0219 04:02:39.793375    8476 out.go:177] * Done! kubectl is now configured to use "multinode-657900" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sun 2023-02-19 03:59:04 UTC, ends at Sun 2023-02-19 04:03:31 UTC. --
	Feb 19 04:00:42 multinode-657900 dockerd[1154]: time="2023-02-19T04:00:42.337038265Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/3cc329202fb1e6db484d85f0744680d8032f91dd39570bd54964dac70b0e44ab pid=5549 runtime=io.containerd.runc.v2
	Feb 19 04:00:43 multinode-657900 dockerd[1154]: time="2023-02-19T04:00:43.973371361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:00:43 multinode-657900 dockerd[1154]: time="2023-02-19T04:00:43.973563461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:00:43 multinode-657900 dockerd[1154]: time="2023-02-19T04:00:43.973616061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:00:43 multinode-657900 dockerd[1154]: time="2023-02-19T04:00:43.974213262Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/addeabc5c2e0487fa2299dff442124f83dced0e0e78d7d7cce9d4058b52ed641 pid=5755 runtime=io.containerd.runc.v2
	Feb 19 04:00:44 multinode-657900 dockerd[1154]: time="2023-02-19T04:00:44.012520696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:00:44 multinode-657900 dockerd[1154]: time="2023-02-19T04:00:44.012803996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:00:44 multinode-657900 dockerd[1154]: time="2023-02-19T04:00:44.013088196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:00:44 multinode-657900 dockerd[1154]: time="2023-02-19T04:00:44.014657698Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e4a0c5408369262c8d9b6c3c49988c0f6f614b055a4366a3e6028bba092808fe pid=5780 runtime=io.containerd.runc.v2
	Feb 19 04:00:44 multinode-657900 dockerd[1154]: time="2023-02-19T04:00:44.742921040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:00:44 multinode-657900 dockerd[1154]: time="2023-02-19T04:00:44.743059440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:00:44 multinode-657900 dockerd[1154]: time="2023-02-19T04:00:44.743084740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:00:44 multinode-657900 dockerd[1154]: time="2023-02-19T04:00:44.743892040Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b0d34c23d93e6365df8eccc588f64d9f74f67fa0640152490c47508e539f9ed9 pid=5875 runtime=io.containerd.runc.v2
	Feb 19 04:00:44 multinode-657900 dockerd[1154]: time="2023-02-19T04:00:44.864323647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:00:44 multinode-657900 dockerd[1154]: time="2023-02-19T04:00:44.864495747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:00:44 multinode-657900 dockerd[1154]: time="2023-02-19T04:00:44.864518647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:00:44 multinode-657900 dockerd[1154]: time="2023-02-19T04:00:44.864809047Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0eb749d12a49525400d75ea04de3559232d3579479e91d60309a851c4597e790 pid=5924 runtime=io.containerd.runc.v2
	Feb 19 04:02:50 multinode-657900 dockerd[1154]: time="2023-02-19T04:02:50.902545557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:02:50 multinode-657900 dockerd[1154]: time="2023-02-19T04:02:50.902715754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:02:50 multinode-657900 dockerd[1154]: time="2023-02-19T04:02:50.902735554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:02:50 multinode-657900 dockerd[1154]: time="2023-02-19T04:02:50.903076349Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d42b0d327c16fb67a7330bbb94f4e9f59ba52c44e2b06c4c3b6a65221692a6c5 pid=7218 runtime=io.containerd.runc.v2
	Feb 19 04:02:52 multinode-657900 dockerd[1154]: time="2023-02-19T04:02:52.824633481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:02:52 multinode-657900 dockerd[1154]: time="2023-02-19T04:02:52.824814878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:02:52 multinode-657900 dockerd[1154]: time="2023-02-19T04:02:52.824832778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:02:52 multinode-657900 dockerd[1154]: time="2023-02-19T04:02:52.825139673Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/9a54a7d3eef7d123d586c24a9fb328a3ebe8b878f166a314a1854dd8eff8fb77 pid=7311 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	9a54a7d3eef7d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   39 seconds ago      Running             busybox                   0                   d42b0d327c16f
	0eb749d12a495       5185b96f0becf                                                                                         2 minutes ago       Running             coredns                   0                   addeabc5c2e04
	b0d34c23d93e6       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       0                   e4a0c54083692
	3cc329202fb1e       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              2 minutes ago       Running             kindnet-cni               0                   7c26aac822f9e
	ca0d83d4d696e       46a6bb3c77ce0                                                                                         2 minutes ago       Running             kube-proxy                0                   35c5df6e4d7f1
	4c9cc5564cf44       fce326961ae2d                                                                                         3 minutes ago       Running             etcd                      0                   0f7e494e02218
	2f34e1aaa1b5f       655493523f607                                                                                         3 minutes ago       Running             kube-scheduler            0                   9cad608b4ab6e
	105abb87f41ff       e9c08e11b07f6                                                                                         3 minutes ago       Running             kube-controller-manager   0                   a766f49230c1f
	55e12988bbaef       deb04688c4a35                                                                                         3 minutes ago       Running             kube-apiserver            0                   ff2f979f1ced3
	
	* 
	* ==> coredns [0eb749d12a49] <==
	* [INFO] 10.244.1.2:45935 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000266697s
	[INFO] 10.244.0.3:51743 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000320395s
	[INFO] 10.244.0.3:55999 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000196597s
	[INFO] 10.244.0.3:36128 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109599s
	[INFO] 10.244.0.3:33835 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154698s
	[INFO] 10.244.0.3:46693 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000138799s
	[INFO] 10.244.0.3:40538 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211898s
	[INFO] 10.244.0.3:40861 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069599s
	[INFO] 10.244.0.3:43027 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147798s
	[INFO] 10.244.1.2:41835 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215397s
	[INFO] 10.244.1.2:60791 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068899s
	[INFO] 10.244.1.2:42879 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144698s
	[INFO] 10.244.1.2:52603 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197897s
	[INFO] 10.244.0.3:53656 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244797s
	[INFO] 10.244.0.3:52084 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000363795s
	[INFO] 10.244.0.3:35462 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115198s
	[INFO] 10.244.0.3:56378 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202797s
	[INFO] 10.244.1.2:54357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000227497s
	[INFO] 10.244.1.2:36124 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112098s
	[INFO] 10.244.1.2:48224 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084699s
	[INFO] 10.244.1.2:56851 - 5 "PTR IN 1.240.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093899s
	[INFO] 10.244.0.3:44657 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000399095s
	[INFO] 10.244.0.3:49393 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000249097s
	[INFO] 10.244.0.3:54475 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000181598s
	[INFO] 10.244.0.3:51210 - 5 "PTR IN 1.240.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000505495s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-657900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-657900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b522747fea7d12101d906a75c46b71d9d9f96e61
	                    minikube.k8s.io/name=multinode-657900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_19T04_00_21_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Feb 2023 04:00:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-657900
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Feb 2023 04:03:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Feb 2023 04:03:23 +0000   Sun, 19 Feb 2023 04:00:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Feb 2023 04:03:23 +0000   Sun, 19 Feb 2023 04:00:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Feb 2023 04:03:23 +0000   Sun, 19 Feb 2023 04:00:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Feb 2023 04:03:23 +0000   Sun, 19 Feb 2023 04:00:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.246.233
	  Hostname:    multinode-657900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 42f3b9a5530b4c36846ff44f37afac9e
	  System UUID:                1ab1fdf1-fba4-7b4d-9307-f55ed7af7e26
	  Boot ID:                    247feda5-78dd-4c04-9e98-48bf99561090
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-xg2wx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 coredns-787d4945fb-9mvfg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m59s
	  kube-system                 etcd-multinode-657900                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m13s
	  kube-system                 kindnet-lvjng                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m59s
	  kube-system                 kube-apiserver-multinode-657900             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m16s
	  kube-system                 kube-controller-manager-multinode-657900    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 kube-proxy-kcm8m                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  kube-system                 kube-scheduler-multinode-657900             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m57s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m27s (x5 over 3m27s)  kubelet          Node multinode-657900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s (x5 over 3m27s)  kubelet          Node multinode-657900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m27s (x4 over 3m27s)  kubelet          Node multinode-657900 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m13s                  kubelet          Node multinode-657900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m13s                  kubelet          Node multinode-657900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m13s                  kubelet          Node multinode-657900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m                     node-controller  Node multinode-657900 event: Registered Node multinode-657900 in Controller
	  Normal  NodeReady                2m49s                  kubelet          Node multinode-657900 status is now: NodeReady
	
	
	Name:               multinode-657900-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-657900-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Feb 2023 04:02:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-657900-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Feb 2023 04:03:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Feb 2023 04:02:53 +0000   Sun, 19 Feb 2023 04:02:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Feb 2023 04:02:53 +0000   Sun, 19 Feb 2023 04:02:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Feb 2023 04:02:53 +0000   Sun, 19 Feb 2023 04:02:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Feb 2023 04:02:53 +0000   Sun, 19 Feb 2023 04:02:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.248.228
	  Hostname:    multinode-657900-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 455147b72b0a48baaffe061dad19dadd
	  System UUID:                9d847d5f-b13d-1b42-8a73-2f59d1ebf938
	  Boot ID:                    b9c15639-7e78-4a89-bc6d-a7564d9762ba
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-brhr9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kindnet-fp2c9               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      70s
	  kube-system                 kube-proxy-8h9z4            0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 64s                kube-proxy       
	  Normal  Starting                 70s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s (x2 over 70s)  kubelet          Node multinode-657900-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s (x2 over 70s)  kubelet          Node multinode-657900-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s (x2 over 70s)  kubelet          Node multinode-657900-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  70s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           65s                node-controller  Node multinode-657900-m02 event: Registered Node multinode-657900-m02 in Controller
	  Normal  NodeReady                55s                kubelet          Node multinode-657900-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.436363] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +2.216610] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +1.174573] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.906710] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +16.117572] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.140638] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[ +22.847200] systemd-fstab-generator[915]: Ignoring "noauto" for root device
	[  +2.090601] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.598506] systemd-fstab-generator[1077]: Ignoring "noauto" for root device
	[  +0.482398] systemd-fstab-generator[1115]: Ignoring "noauto" for root device
	[  +0.179356] systemd-fstab-generator[1126]: Ignoring "noauto" for root device
	[  +0.223750] systemd-fstab-generator[1139]: Ignoring "noauto" for root device
	[  +1.641251] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[  +0.153981] systemd-fstab-generator[1297]: Ignoring "noauto" for root device
	[  +0.174954] systemd-fstab-generator[1308]: Ignoring "noauto" for root device
	[  +0.177023] systemd-fstab-generator[1319]: Ignoring "noauto" for root device
	[Feb19 04:00] systemd-fstab-generator[1567]: Ignoring "noauto" for root device
	[  +0.882290] kauditd_printk_skb: 68 callbacks suppressed
	[ +13.504254] systemd-fstab-generator[2608]: Ignoring "noauto" for root device
	[ +14.878872] hrtimer: interrupt took 493101 ns
	[  +0.735558] kauditd_printk_skb: 8 callbacks suppressed
	[  +8.432477] kauditd_printk_skb: 12 callbacks suppressed
	
	* 
	* ==> etcd [4c9cc5564cf4] <==
	* {"level":"warn","ts":"2023-02-19T04:02:08.287Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:02:07.741Z","time spent":"545.762417ms","remote":"127.0.0.1:58618","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.28.246.233\" mod_revision:474 > success:<request_put:<key:\"/registry/masterleases/172.28.246.233\" value_size:67 lease:8748953006622288238 >> failure:<request_range:<key:\"/registry/masterleases/172.28.246.233\" > >"}
	{"level":"info","ts":"2023-02-19T04:02:08.287Z","caller":"traceutil/trace.go:171","msg":"trace[49431997] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"354.108276ms","start":"2023-02-19T04:02:07.933Z","end":"2023-02-19T04:02:08.287Z","steps":["trace[49431997] 'process raft request'  (duration: 353.668176ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-19T04:02:08.287Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:02:07.933Z","time spent":"354.403076ms","remote":"127.0.0.1:58668","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":664,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/kube-apiserver-ir66ijhamkzicwbfwsejbncqwe\" mod_revision:475 > success:<request_put:<key:\"/registry/leases/kube-system/kube-apiserver-ir66ijhamkzicwbfwsejbncqwe\" value_size:586 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-apiserver-ir66ijhamkzicwbfwsejbncqwe\" > >"}
	{"level":"warn","ts":"2023-02-19T04:02:09.005Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"357.315976ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8748953006622288246 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:481 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-02-19T04:02:09.005Z","caller":"traceutil/trace.go:171","msg":"trace[744910486] linearizableReadLoop","detail":"{readStateIndex:518; appliedIndex:517; }","duration":"556.970919ms","start":"2023-02-19T04:02:08.448Z","end":"2023-02-19T04:02:09.005Z","steps":["trace[744910486] 'read index received'  (duration: 199.513843ms)","trace[744910486] 'applied index is now lower than readState.Index'  (duration: 357.455776ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:02:09.006Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"557.161019ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-02-19T04:02:09.006Z","caller":"traceutil/trace.go:171","msg":"trace[1900072185] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:484; }","duration":"557.79202ms","start":"2023-02-19T04:02:08.448Z","end":"2023-02-19T04:02:09.006Z","steps":["trace[1900072185] 'agreement among raft nodes before linearized reading'  (duration: 557.029419ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-19T04:02:09.006Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:02:08.448Z","time spent":"557.85572ms","remote":"127.0.0.1:58608","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2023-02-19T04:02:09.006Z","caller":"traceutil/trace.go:171","msg":"trace[149301640] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"643.361138ms","start":"2023-02-19T04:02:08.363Z","end":"2023-02-19T04:02:09.006Z","steps":["trace[149301640] 'process raft request'  (duration: 285.156261ms)","trace[149301640] 'compare'  (duration: 356.296276ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:02:09.007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:02:08.363Z","time spent":"643.863738ms","remote":"127.0.0.1:58638","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:481 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-02-19T04:02:11.238Z","caller":"traceutil/trace.go:171","msg":"trace[817731830] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"216.404046ms","start":"2023-02-19T04:02:11.021Z","end":"2023-02-19T04:02:11.238Z","steps":["trace[817731830] 'process raft request'  (duration: 216.301446ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-19T04:02:11.595Z","caller":"traceutil/trace.go:171","msg":"trace[181763704] linearizableReadLoop","detail":"{readStateIndex:520; appliedIndex:519; }","duration":"147.045531ms","start":"2023-02-19T04:02:11.448Z","end":"2023-02-19T04:02:11.595Z","steps":["trace[181763704] 'read index received'  (duration: 53.942411ms)","trace[181763704] 'applied index is now lower than readState.Index'  (duration: 93.10212ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:02:11.595Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"147.236431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-02-19T04:02:11.595Z","caller":"traceutil/trace.go:171","msg":"trace[1654355928] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:486; }","duration":"147.283331ms","start":"2023-02-19T04:02:11.448Z","end":"2023-02-19T04:02:11.595Z","steps":["trace[1654355928] 'agreement among raft nodes before linearized reading'  (duration: 147.162831ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-19T04:02:11.596Z","caller":"traceutil/trace.go:171","msg":"trace[747344297] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"212.502045ms","start":"2023-02-19T04:02:11.383Z","end":"2023-02-19T04:02:11.596Z","steps":["trace[747344297] 'process raft request'  (duration: 118.773925ms)","trace[747344297] 'compare'  (duration: 92.92052ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:02:13.597Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"146.87313ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-02-19T04:02:13.597Z","caller":"traceutil/trace.go:171","msg":"trace[1298760145] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:487; }","duration":"147.03233ms","start":"2023-02-19T04:02:13.450Z","end":"2023-02-19T04:02:13.597Z","steps":["trace[1298760145] 'range keys from in-memory index tree'  (duration: 146.74413ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-19T04:02:13.597Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"193.87844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:4644"}
	{"level":"info","ts":"2023-02-19T04:02:13.597Z","caller":"traceutil/trace.go:171","msg":"trace[1300779877] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:487; }","duration":"193.95374ms","start":"2023-02-19T04:02:13.403Z","end":"2023-02-19T04:02:13.597Z","steps":["trace[1300779877] 'range keys from in-memory index tree'  (duration: 193.59414ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-19T04:02:35.655Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"148.063776ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2023-02-19T04:02:35.656Z","caller":"traceutil/trace.go:171","msg":"trace[2000177527] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:546; }","duration":"148.43217ms","start":"2023-02-19T04:02:35.507Z","end":"2023-02-19T04:02:35.656Z","steps":["trace[2000177527] 'range keys from in-memory index tree'  (duration: 147.236091ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-19T04:02:35.790Z","caller":"traceutil/trace.go:171","msg":"trace[1075457504] linearizableReadLoop","detail":"{readStateIndex:590; appliedIndex:589; }","duration":"122.066055ms","start":"2023-02-19T04:02:35.668Z","end":"2023-02-19T04:02:35.790Z","steps":["trace[1075457504] 'read index received'  (duration: 121.949057ms)","trace[1075457504] 'applied index is now lower than readState.Index'  (duration: 116.398µs)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:02:35.790Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"122.29605ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-657900-m02\" ","response":"range_response_count:1 size:3863"}
	{"level":"info","ts":"2023-02-19T04:02:35.790Z","caller":"traceutil/trace.go:171","msg":"trace[594137289] range","detail":"{range_begin:/registry/minions/multinode-657900-m02; range_end:; response_count:1; response_revision:547; }","duration":"122.35865ms","start":"2023-02-19T04:02:35.668Z","end":"2023-02-19T04:02:35.790Z","steps":["trace[594137289] 'agreement among raft nodes before linearized reading'  (duration: 122.150253ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-19T04:02:35.790Z","caller":"traceutil/trace.go:171","msg":"trace[227575274] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"128.142642ms","start":"2023-02-19T04:02:35.662Z","end":"2023-02-19T04:02:35.790Z","steps":["trace[227575274] 'process raft request'  (duration: 127.805048ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  04:03:32 up 4 min,  0 users,  load average: 0.45, 0.45, 0.22
	Linux multinode-657900 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [55e12988bbae] <==
	* I0219 04:00:15.914340       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0219 04:00:16.283088       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0219 04:00:16.290863       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0219 04:00:16.290880       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0219 04:00:17.376466       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0219 04:00:17.464058       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0219 04:00:17.645605       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0219 04:00:17.658797       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [172.28.246.233]
	I0219 04:00:17.659554       1 controller.go:615] quota admission added evaluator for: endpoints
	I0219 04:00:17.673732       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0219 04:00:18.426951       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0219 04:00:19.275425       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0219 04:00:19.298370       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0219 04:00:19.328911       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0219 04:00:33.104673       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0219 04:00:33.285970       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0219 04:02:08.288788       1 trace.go:219] Trace[1812155436]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.28.246.233,type:*v1.Endpoints,resource:apiServerIPInfo (19-Feb-2023 04:02:07.636) (total time: 652ms):
	Trace[1812155436]: ---"Transaction prepared" 103ms (04:02:07.741)
	Trace[1812155436]: ---"Txn call completed" 547ms (04:02:08.288)
	Trace[1812155436]: [652.44004ms] [652.44004ms] END
	I0219 04:02:09.008239       1 trace.go:219] Trace[1518463197]: "Update" accept:application/json, */*,audit-id:7362a227-3099-4a4c-9bcf-423d900f0fcd,client:172.28.246.233,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (19-Feb-2023 04:02:08.361) (total time: 647ms):
	Trace[1518463197]: ["GuaranteedUpdate etcd3" audit-id:7362a227-3099-4a4c-9bcf-423d900f0fcd,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 646ms (04:02:08.361)
	Trace[1518463197]:  ---"Txn call completed" 645ms (04:02:09.007)]
	Trace[1518463197]: [647.127638ms] [647.127638ms] END
	E0219 04:02:59.877072       1 upgradeaware.go:426] Error proxying data from client to backend: write tcp 172.28.246.233:32768->172.28.246.233:10250: write: connection reset by peer
	
	* 
	* ==> kube-controller-manager [105abb87f41f] <==
	* W0219 04:00:32.464991       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-657900. Assuming now as a timestamp.
	I0219 04:00:32.465270       1 node_lifecycle_controller.go:1204] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0219 04:00:32.496969       1 shared_informer.go:280] Caches are synced for resource quota
	I0219 04:00:32.551705       1 shared_informer.go:280] Caches are synced for resource quota
	I0219 04:00:32.931259       1 shared_informer.go:280] Caches are synced for garbage collector
	I0219 04:00:32.931281       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0219 04:00:32.955046       1 shared_informer.go:280] Caches are synced for garbage collector
	I0219 04:00:33.119967       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 2"
	I0219 04:00:33.182410       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0219 04:00:33.313357       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kcm8m"
	I0219 04:00:33.324972       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lvjng"
	I0219 04:00:33.413997       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-4f26f"
	I0219 04:00:33.433995       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-9mvfg"
	I0219 04:00:33.517944       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-4f26f"
	I0219 04:00:47.468066       1 node_lifecycle_controller.go:1231] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0219 04:02:22.763946       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-657900-m02" does not exist
	I0219 04:02:22.783544       1 range_allocator.go:372] Set node multinode-657900-m02 PodCIDR to [10.244.1.0/24]
	I0219 04:02:22.824883       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fp2c9"
	I0219 04:02:22.825255       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8h9z4"
	W0219 04:02:27.487219       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-657900-m02. Assuming now as a timestamp.
	I0219 04:02:27.487511       1 event.go:294] "Event occurred" object="multinode-657900-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-657900-m02 event: Registered Node multinode-657900-m02 in Controller"
	W0219 04:02:37.904032       1 topologycache.go:232] Can't get CPU or zone information for multinode-657900-m02 node
	I0219 04:02:50.244943       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0219 04:02:50.284045       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-brhr9"
	I0219 04:02:50.312363       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-xg2wx"
	
	* 
	* ==> kube-proxy [ca0d83d4d696] <==
	* I0219 04:00:34.558667       1 node.go:163] Successfully retrieved node IP: 172.28.246.233
	I0219 04:00:34.558850       1 server_others.go:109] "Detected node IP" address="172.28.246.233"
	I0219 04:00:34.559194       1 server_others.go:535] "Using iptables proxy"
	I0219 04:00:34.635353       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0219 04:00:34.635594       1 server_others.go:176] "Using iptables Proxier"
	I0219 04:00:34.635644       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0219 04:00:34.636174       1 server.go:655] "Version info" version="v1.26.1"
	I0219 04:00:34.636196       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0219 04:00:34.638037       1 config.go:317] "Starting service config controller"
	I0219 04:00:34.638063       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0219 04:00:34.638627       1 config.go:444] "Starting node config controller"
	I0219 04:00:34.638637       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0219 04:00:34.638676       1 config.go:226] "Starting endpoint slice config controller"
	I0219 04:00:34.638685       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0219 04:00:34.738833       1 shared_informer.go:280] Caches are synced for service config
	I0219 04:00:34.738833       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0219 04:00:34.738850       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2f34e1aaa1b5] <==
	* W0219 04:00:16.457014       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0219 04:00:16.457045       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0219 04:00:16.516091       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0219 04:00:16.516342       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0219 04:00:16.535487       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0219 04:00:16.535867       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0219 04:00:16.552782       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0219 04:00:16.552819       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0219 04:00:16.579142       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0219 04:00:16.579268       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0219 04:00:16.627188       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0219 04:00:16.627590       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0219 04:00:16.693819       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0219 04:00:16.694379       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0219 04:00:16.695430       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0219 04:00:16.695471       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0219 04:00:16.703281       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0219 04:00:16.709622       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0219 04:00:16.775568       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0219 04:00:16.777199       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0219 04:00:16.830482       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0219 04:00:16.830549       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0219 04:00:16.958142       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0219 04:00:16.958279       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0219 04:00:18.851632       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sun 2023-02-19 03:59:04 UTC, ends at Sun 2023-02-19 04:03:32 UTC. --
	Feb 19 04:00:33 multinode-657900 kubelet[2635]: I0219 04:00:33.353920    2635 topology_manager.go:210] "Topology Admit Handler"
	Feb 19 04:00:33 multinode-657900 kubelet[2635]: I0219 04:00:33.354045    2635 topology_manager.go:210] "Topology Admit Handler"
	Feb 19 04:00:33 multinode-657900 kubelet[2635]: I0219 04:00:33.419939    2635 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm4r2\" (UniqueName: \"kubernetes.io/projected/df7a9269-516f-4b66-af0f-429b21ee31cc-kube-api-access-nm4r2\") pod \"kindnet-lvjng\" (UID: \"df7a9269-516f-4b66-af0f-429b21ee31cc\") " pod="kube-system/kindnet-lvjng"
	Feb 19 04:00:33 multinode-657900 kubelet[2635]: I0219 04:00:33.420017    2635 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ce14b4f-6df3-4822-ac2b-06f3417e8eaa-lib-modules\") pod \"kube-proxy-kcm8m\" (UID: \"8ce14b4f-6df3-4822-ac2b-06f3417e8eaa\") " pod="kube-system/kube-proxy-kcm8m"
	Feb 19 04:00:33 multinode-657900 kubelet[2635]: I0219 04:00:33.427902    2635 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/df7a9269-516f-4b66-af0f-429b21ee31cc-cni-cfg\") pod \"kindnet-lvjng\" (UID: \"df7a9269-516f-4b66-af0f-429b21ee31cc\") " pod="kube-system/kindnet-lvjng"
	Feb 19 04:00:33 multinode-657900 kubelet[2635]: I0219 04:00:33.428047    2635 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df7a9269-516f-4b66-af0f-429b21ee31cc-lib-modules\") pod \"kindnet-lvjng\" (UID: \"df7a9269-516f-4b66-af0f-429b21ee31cc\") " pod="kube-system/kindnet-lvjng"
	Feb 19 04:00:33 multinode-657900 kubelet[2635]: I0219 04:00:33.428251    2635 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8ce14b4f-6df3-4822-ac2b-06f3417e8eaa-kube-proxy\") pod \"kube-proxy-kcm8m\" (UID: \"8ce14b4f-6df3-4822-ac2b-06f3417e8eaa\") " pod="kube-system/kube-proxy-kcm8m"
	Feb 19 04:00:33 multinode-657900 kubelet[2635]: I0219 04:00:33.428696    2635 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ce14b4f-6df3-4822-ac2b-06f3417e8eaa-xtables-lock\") pod \"kube-proxy-kcm8m\" (UID: \"8ce14b4f-6df3-4822-ac2b-06f3417e8eaa\") " pod="kube-system/kube-proxy-kcm8m"
	Feb 19 04:00:33 multinode-657900 kubelet[2635]: I0219 04:00:33.428825    2635 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcf5c\" (UniqueName: \"kubernetes.io/projected/8ce14b4f-6df3-4822-ac2b-06f3417e8eaa-kube-api-access-pcf5c\") pod \"kube-proxy-kcm8m\" (UID: \"8ce14b4f-6df3-4822-ac2b-06f3417e8eaa\") " pod="kube-system/kube-proxy-kcm8m"
	Feb 19 04:00:33 multinode-657900 kubelet[2635]: I0219 04:00:33.429171    2635 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df7a9269-516f-4b66-af0f-429b21ee31cc-xtables-lock\") pod \"kindnet-lvjng\" (UID: \"df7a9269-516f-4b66-af0f-429b21ee31cc\") " pod="kube-system/kindnet-lvjng"
	Feb 19 04:00:38 multinode-657900 kubelet[2635]: I0219 04:00:38.318037    2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c26aac822f9e43b7a21d565f1ce7d9bbc37aefb70c83ab1c9c3c4cb08ed6029"
	Feb 19 04:00:38 multinode-657900 kubelet[2635]: I0219 04:00:38.385960    2635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kcm8m" podStartSLOduration=5.385922583 pod.CreationTimestamp="2023-02-19 04:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-19 04:00:38.385638882 +0000 UTC m=+19.159046667" watchObservedRunningTime="2023-02-19 04:00:38.385922583 +0000 UTC m=+19.159330368"
	Feb 19 04:00:43 multinode-657900 kubelet[2635]: I0219 04:00:43.375782    2635 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Feb 19 04:00:43 multinode-657900 kubelet[2635]: I0219 04:00:43.427384    2635 topology_manager.go:210] "Topology Admit Handler"
	Feb 19 04:00:43 multinode-657900 kubelet[2635]: I0219 04:00:43.452580    2635 topology_manager.go:210] "Topology Admit Handler"
	Feb 19 04:00:43 multinode-657900 kubelet[2635]: I0219 04:00:43.543170    2635 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpw64\" (UniqueName: \"kubernetes.io/projected/38bce706-085e-44e0-bf5e-97cbdebb682e-kube-api-access-gpw64\") pod \"coredns-787d4945fb-9mvfg\" (UID: \"38bce706-085e-44e0-bf5e-97cbdebb682e\") " pod="kube-system/coredns-787d4945fb-9mvfg"
	Feb 19 04:00:43 multinode-657900 kubelet[2635]: I0219 04:00:43.543290    2635 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38bce706-085e-44e0-bf5e-97cbdebb682e-config-volume\") pod \"coredns-787d4945fb-9mvfg\" (UID: \"38bce706-085e-44e0-bf5e-97cbdebb682e\") " pod="kube-system/coredns-787d4945fb-9mvfg"
	Feb 19 04:00:43 multinode-657900 kubelet[2635]: I0219 04:00:43.543321    2635 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4fcb063a-be6a-41e8-9379-c8f7cf16a165-tmp\") pod \"storage-provisioner\" (UID: \"4fcb063a-be6a-41e8-9379-c8f7cf16a165\") " pod="kube-system/storage-provisioner"
	Feb 19 04:00:43 multinode-657900 kubelet[2635]: I0219 04:00:43.543345    2635 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lstsx\" (UniqueName: \"kubernetes.io/projected/4fcb063a-be6a-41e8-9379-c8f7cf16a165-kube-api-access-lstsx\") pod \"storage-provisioner\" (UID: \"4fcb063a-be6a-41e8-9379-c8f7cf16a165\") " pod="kube-system/storage-provisioner"
	Feb 19 04:00:45 multinode-657900 kubelet[2635]: I0219 04:00:45.677676    2635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-lvjng" podStartSLOduration=-9.22337202417714e+09 pod.CreationTimestamp="2023-02-19 04:00:33 +0000 UTC" firstStartedPulling="2023-02-19 04:00:38.32539402 +0000 UTC m=+19.098801805" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-19 04:00:43.62964515 +0000 UTC m=+24.403053035" watchObservedRunningTime="2023-02-19 04:00:45.677635848 +0000 UTC m=+26.451043633"
	Feb 19 04:00:45 multinode-657900 kubelet[2635]: I0219 04:00:45.710027    2635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-9mvfg" podStartSLOduration=12.709988176 pod.CreationTimestamp="2023-02-19 04:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-19 04:00:45.709527975 +0000 UTC m=+26.482935760" watchObservedRunningTime="2023-02-19 04:00:45.709988176 +0000 UTC m=+26.483396061"
	Feb 19 04:00:45 multinode-657900 kubelet[2635]: I0219 04:00:45.710332    2635 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=9.710305276 pod.CreationTimestamp="2023-02-19 04:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-19 04:00:45.678511649 +0000 UTC m=+26.451919434" watchObservedRunningTime="2023-02-19 04:00:45.710305276 +0000 UTC m=+26.483713061"
	Feb 19 04:02:50 multinode-657900 kubelet[2635]: I0219 04:02:50.335026    2635 topology_manager.go:210] "Topology Admit Handler"
	Feb 19 04:02:50 multinode-657900 kubelet[2635]: I0219 04:02:50.498229    2635 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrctm\" (UniqueName: \"kubernetes.io/projected/ab8a5a92-0809-4a36-80a2-d969e4a19341-kube-api-access-qrctm\") pod \"busybox-6b86dd6d48-xg2wx\" (UID: \"ab8a5a92-0809-4a36-80a2-d969e4a19341\") " pod="default/busybox-6b86dd6d48-xg2wx"
	Feb 19 04:02:51 multinode-657900 kubelet[2635]: I0219 04:02:51.545047    2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d42b0d327c16fb67a7330bbb94f4e9f59ba52c44e2b06c4c3b6a65221692a6c5"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-657900 -n multinode-657900
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-657900 -n multinode-657900: (4.9815042s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-657900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (39.41s)

TestMultiNode/serial/RestartKeepsNodes (348.9s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-657900
multinode_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-657900
E0219 04:10:41.840847   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
multinode_test.go:288: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-657900: (1m0.9515479s)
multinode_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-657900 --wait=true -v=8 --alsologtostderr
E0219 04:12:14.726501   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 04:14:05.285847   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 04:15:17.938802   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 04:15:25.046159   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 04:15:41.834907   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
multinode_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-657900 --wait=true -v=8 --alsologtostderr: (4m30.1292585s)
multinode_test.go:298: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-657900
multinode_test.go:305: reported node list is not the same after restart. Before restart: multinode-657900	172.28.246.233
multinode-657900-m02	172.28.248.228
multinode-657900-m03	172.28.246.126
After restart: multinode-657900	172.28.244.121
multinode-657900-m02	172.28.250.48
multinode-657900-m03	172.28.250.14
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-657900 -n multinode-657900
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-657900 -n multinode-657900: (4.7282693s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 logs -n 25: (4.7184254s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | multinode-657900 ssh -n                                                                                                  | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:06 GMT | 19 Feb 23 04:06 GMT |
	|         | multinode-657900-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-657900 cp multinode-657900-m02:/home/docker/cp-test.txt                                                        | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:06 GMT | 19 Feb 23 04:06 GMT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile3876665541\001\cp-test_multinode-657900-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-657900 ssh -n                                                                                                  | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:06 GMT | 19 Feb 23 04:06 GMT |
	|         | multinode-657900-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-657900 cp multinode-657900-m02:/home/docker/cp-test.txt                                                        | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:06 GMT | 19 Feb 23 04:07 GMT |
	|         | multinode-657900:/home/docker/cp-test_multinode-657900-m02_multinode-657900.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-657900 ssh -n                                                                                                  | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:07 GMT | 19 Feb 23 04:07 GMT |
	|         | multinode-657900-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-657900 ssh -n multinode-657900 sudo cat                                                                        | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:07 GMT | 19 Feb 23 04:07 GMT |
	|         | /home/docker/cp-test_multinode-657900-m02_multinode-657900.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-657900 cp multinode-657900-m02:/home/docker/cp-test.txt                                                        | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:07 GMT | 19 Feb 23 04:07 GMT |
	|         | multinode-657900-m03:/home/docker/cp-test_multinode-657900-m02_multinode-657900-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-657900 ssh -n                                                                                                  | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:07 GMT | 19 Feb 23 04:07 GMT |
	|         | multinode-657900-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-657900 ssh -n multinode-657900-m03 sudo cat                                                                    | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:07 GMT | 19 Feb 23 04:07 GMT |
	|         | /home/docker/cp-test_multinode-657900-m02_multinode-657900-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-657900 cp testdata\cp-test.txt                                                                                 | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:07 GMT | 19 Feb 23 04:07 GMT |
	|         | multinode-657900-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-657900 ssh -n                                                                                                  | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:07 GMT | 19 Feb 23 04:07 GMT |
	|         | multinode-657900-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-657900 cp multinode-657900-m03:/home/docker/cp-test.txt                                                        | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:07 GMT | 19 Feb 23 04:07 GMT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile3876665541\001\cp-test_multinode-657900-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-657900 ssh -n                                                                                                  | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:07 GMT | 19 Feb 23 04:07 GMT |
	|         | multinode-657900-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-657900 cp multinode-657900-m03:/home/docker/cp-test.txt                                                        | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:07 GMT | 19 Feb 23 04:07 GMT |
	|         | multinode-657900:/home/docker/cp-test_multinode-657900-m03_multinode-657900.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-657900 ssh -n                                                                                                  | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:07 GMT | 19 Feb 23 04:07 GMT |
	|         | multinode-657900-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-657900 ssh -n multinode-657900 sudo cat                                                                        | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:07 GMT | 19 Feb 23 04:07 GMT |
	|         | /home/docker/cp-test_multinode-657900-m03_multinode-657900.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-657900 cp multinode-657900-m03:/home/docker/cp-test.txt                                                        | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:07 GMT | 19 Feb 23 04:08 GMT |
	|         | multinode-657900-m02:/home/docker/cp-test_multinode-657900-m03_multinode-657900-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-657900 ssh -n                                                                                                  | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:08 GMT | 19 Feb 23 04:08 GMT |
	|         | multinode-657900-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-657900 ssh -n multinode-657900-m02 sudo cat                                                                    | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:08 GMT | 19 Feb 23 04:08 GMT |
	|         | /home/docker/cp-test_multinode-657900-m03_multinode-657900-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-657900 node stop m03                                                                                           | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:08 GMT | 19 Feb 23 04:08 GMT |
	| node    | multinode-657900 node start                                                                                              | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:08 GMT | 19 Feb 23 04:09 GMT |
	|         | m03 --alsologtostderr                                                                                                    |                  |                   |         |                     |                     |
	| node    | list -p multinode-657900                                                                                                 | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:10 GMT |                     |
	| stop    | -p multinode-657900                                                                                                      | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:10 GMT | 19 Feb 23 04:11 GMT |
	| start   | -p multinode-657900                                                                                                      | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:11 GMT | 19 Feb 23 04:15 GMT |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	| node    | list -p multinode-657900                                                                                                 | multinode-657900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:15 GMT |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/19 04:11:12
	Running on machine: minikube1
	Binary: Built with gc go1.20 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0219 04:11:12.200397    8336 out.go:296] Setting OutFile to fd 836 ...
	I0219 04:11:12.261752    8336 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:11:12.261752    8336 out.go:309] Setting ErrFile to fd 700...
	I0219 04:11:12.261752    8336 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:11:12.282375    8336 out.go:303] Setting JSON to false
	I0219 04:11:12.285325    8336 start.go:125] hostinfo: {"hostname":"minikube1","uptime":16861,"bootTime":1676763010,"procs":146,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2604 Build 19045.2604","kernelVersion":"10.0.19045.2604 Build 19045.2604","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0219 04:11:12.285325    8336 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0219 04:11:12.290612    8336 out.go:177] * [multinode-657900] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	I0219 04:11:12.294101    8336 notify.go:220] Checking for updates...
	I0219 04:11:12.297063    8336 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:11:12.299503    8336 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0219 04:11:12.302361    8336 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0219 04:11:12.304654    8336 out.go:177]   - MINIKUBE_LOCATION=master
	I0219 04:11:12.306223    8336 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0219 04:11:12.310301    8336 config.go:182] Loaded profile config "multinode-657900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:11:12.310960    8336 driver.go:365] Setting default libvirt URI to qemu:///system
	I0219 04:11:13.973078    8336 out.go:177] * Using the hyperv driver based on existing profile
	I0219 04:11:13.975340    8336 start.go:296] selected driver: hyperv
	I0219 04:11:13.975340    8336 start.go:857] validating driver "hyperv" against &{Name:multinode-657900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-657900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.246.233 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.248.228 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.246.126 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:11:13.975597    8336 start.go:868] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0219 04:11:14.030686    8336 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0219 04:11:14.030686    8336 cni.go:84] Creating CNI manager for ""
	I0219 04:11:14.030686    8336 cni.go:136] 3 nodes found, recommending kindnet
	I0219 04:11:14.030686    8336 start_flags.go:319] config:
	{Name:multinode-657900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-657900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.246.233 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.248.228 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.246.126 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:11:14.031480    8336 iso.go:125] acquiring lock: {Name:mk0a282de77c20a01e287b73437e6c43df35e4e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:11:14.035977    8336 out.go:177] * Starting control plane node multinode-657900 in cluster multinode-657900
	I0219 04:11:14.038181    8336 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:11:14.038335    8336 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0219 04:11:14.038335    8336 cache.go:57] Caching tarball of preloaded images
	I0219 04:11:14.038335    8336 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0219 04:11:14.038923    8336 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0219 04:11:14.039089    8336 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\config.json ...
	I0219 04:11:14.040591    8336 cache.go:193] Successfully downloaded all kic artifacts
	I0219 04:11:14.041487    8336 start.go:364] acquiring machines lock for multinode-657900: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0219 04:11:14.041582    8336 start.go:368] acquired machines lock for "multinode-657900" in 95.2µs
	I0219 04:11:14.041582    8336 start.go:96] Skipping create...Using existing machine configuration
	I0219 04:11:14.041582    8336 fix.go:55] fixHost starting: 
	I0219 04:11:14.042332    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:14.733122    8336 main.go:141] libmachine: [stdout =====>] : Off
	
	I0219 04:11:14.733122    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:14.733122    8336 fix.go:103] recreateIfNeeded on multinode-657900: state=Stopped err=<nil>
	W0219 04:11:14.733122    8336 fix.go:129] unexpected machine state, will restart: <nil>
	I0219 04:11:14.738903    8336 out.go:177] * Restarting existing hyperv VM for "multinode-657900" ...
	I0219 04:11:14.742437    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-657900
	I0219 04:11:16.358331    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:11:16.358331    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:16.358331    8336 main.go:141] libmachine: Waiting for host to start...
	I0219 04:11:16.358331    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:17.090760    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:17.090760    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:17.090760    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:18.134134    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:11:18.134134    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:19.135711    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:19.864440    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:19.864815    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:19.864872    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:20.879621    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:11:20.879621    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:21.894447    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:22.645651    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:22.645651    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:22.645751    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:23.664399    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:11:23.664399    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:24.670213    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:25.386407    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:25.386752    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:25.386837    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:26.423408    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:11:26.423517    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:27.429601    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:28.161828    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:28.161828    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:28.161828    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:29.214898    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:11:29.214898    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:30.222240    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:30.967031    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:30.967077    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:30.967164    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:31.982965    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:11:31.983006    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:32.997242    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:33.729235    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:33.729300    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:33.729365    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:34.750078    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:11:34.750182    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:35.751732    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:36.498524    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:36.498584    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:36.498584    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:37.514766    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:11:37.514766    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:38.515194    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:39.261100    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:39.261100    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:39.261202    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:40.297173    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:11:40.297173    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:41.298696    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:42.032940    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:42.033196    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:42.033196    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:43.160230    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:11:43.160390    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:43.162721    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:43.905917    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:43.905917    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:43.906048    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:44.981053    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:11:44.981234    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:44.981379    8336 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\config.json ...
	I0219 04:11:44.983924    8336 machine.go:88] provisioning docker machine ...
	I0219 04:11:44.983985    8336 buildroot.go:166] provisioning hostname "multinode-657900"
	I0219 04:11:44.983985    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:45.681499    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:45.681499    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:45.681499    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:46.727651    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:11:46.727705    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:46.731973    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:11:46.733123    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.244.121 22 <nil> <nil>}
	I0219 04:11:46.733123    8336 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-657900 && echo "multinode-657900" | sudo tee /etc/hostname
	I0219 04:11:46.912092    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-657900
	
	I0219 04:11:46.912092    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:47.643733    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:47.643733    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:47.643821    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:48.693158    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:11:48.693158    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:48.697748    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:11:48.698415    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.244.121 22 <nil> <nil>}
	I0219 04:11:48.698969    8336 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-657900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-657900/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-657900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:11:48.868017    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0219 04:11:48.868156    8336 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:11:48.868200    8336 buildroot.go:174] setting up certificates
	I0219 04:11:48.868256    8336 provision.go:83] configureAuth start
	I0219 04:11:48.868351    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:49.595722    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:49.595722    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:49.595805    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:50.620016    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:11:50.620016    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:50.620093    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:51.362245    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:51.362445    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:51.362445    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:52.424952    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:11:52.425100    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:52.425100    8336 provision.go:138] copyHostCerts
	I0219 04:11:52.425100    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0219 04:11:52.425100    8336 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:11:52.425634    8336 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:11:52.426003    8336 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:11:52.426871    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0219 04:11:52.427399    8336 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:11:52.427399    8336 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:11:52.427620    8336 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:11:52.428706    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0219 04:11:52.428944    8336 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:11:52.428944    8336 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:11:52.428944    8336 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:11:52.430742    8336 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-657900 san=[172.28.244.121 172.28.244.121 localhost 127.0.0.1 minikube multinode-657900]
	I0219 04:11:52.521202    8336 provision.go:172] copyRemoteCerts
	I0219 04:11:52.530214    8336 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:11:52.530214    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:53.294701    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:53.294701    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:53.294701    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:54.337071    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:11:54.337154    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:54.337206    8336 sshutil.go:53] new ssh client: &{IP:172.28.244.121 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\id_rsa Username:docker}
	I0219 04:11:54.460089    8336 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.9298819s)
	I0219 04:11:54.460183    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0219 04:11:54.460629    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:11:54.501629    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0219 04:11:54.502203    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0219 04:11:54.539874    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0219 04:11:54.539874    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0219 04:11:54.580907    8336 provision.go:86] duration metric: configureAuth took 5.7126386s
	I0219 04:11:54.580957    8336 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:11:54.581040    8336 config.go:182] Loaded profile config "multinode-657900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:11:54.581040    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:55.316430    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:55.316430    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:55.316497    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:56.329647    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:11:56.329797    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:56.333784    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:11:56.334486    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.244.121 22 <nil> <nil>}
	I0219 04:11:56.334486    8336 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:11:56.484497    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:11:56.484591    8336 buildroot.go:70] root file system type: tmpfs
	I0219 04:11:56.484750    8336 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:11:56.484750    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:57.214801    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:57.214801    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:57.214885    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:11:58.260813    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:11:58.260813    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:58.265778    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:11:58.266527    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.244.121 22 <nil> <nil>}
	I0219 04:11:58.267096    8336 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 04:11:58.449145    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0219 04:11:58.449284    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:11:59.170983    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:11:59.170983    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:11:59.170983    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:12:00.234588    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:12:00.234588    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:00.239218    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:12:00.239918    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.244.121 22 <nil> <nil>}
	I0219 04:12:00.239918    8336 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 04:12:01.698412    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0219 04:12:01.698412    8336 machine.go:91] provisioned docker machine in 16.7144838s
	I0219 04:12:01.698412    8336 start.go:300] post-start starting for "multinode-657900" (driver="hyperv")
	I0219 04:12:01.698412    8336 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 04:12:01.707472    8336 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 04:12:01.707472    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:12:02.424576    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:12:02.424576    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:02.424659    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:12:03.471253    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:12:03.471253    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:03.471253    8336 sshutil.go:53] new ssh client: &{IP:172.28.244.121 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\id_rsa Username:docker}
	I0219 04:12:03.581870    8336 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.8744047s)
	I0219 04:12:03.589855    8336 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 04:12:03.596640    8336 command_runner.go:130] > NAME=Buildroot
	I0219 04:12:03.596640    8336 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
	I0219 04:12:03.596640    8336 command_runner.go:130] > ID=buildroot
	I0219 04:12:03.596640    8336 command_runner.go:130] > VERSION_ID=2021.02.12
	I0219 04:12:03.596741    8336 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0219 04:12:03.596927    8336 info.go:137] Remote host: Buildroot 2021.02.12
	I0219 04:12:03.597012    8336 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 04:12:03.597183    8336 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 04:12:03.598459    8336 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 04:12:03.598459    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> /etc/ssl/certs/101482.pem
	I0219 04:12:03.607965    8336 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 04:12:03.624662    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 04:12:03.664090    8336 start.go:303] post-start completed in 1.965685s
	I0219 04:12:03.664090    8336 fix.go:57] fixHost completed within 49.6226767s
	I0219 04:12:03.664090    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:12:04.387927    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:12:04.387927    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:04.388155    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:12:05.414473    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:12:05.414473    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:05.417944    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:12:05.418784    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.244.121 22 <nil> <nil>}
	I0219 04:12:05.418784    8336 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0219 04:12:05.592278    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676779925.586582332
	
	I0219 04:12:05.592278    8336 fix.go:207] guest clock: 1676779925.586582332
	I0219 04:12:05.592278    8336 fix.go:220] Guest: 2023-02-19 04:12:05.586582332 +0000 GMT Remote: 2023-02-19 04:12:03.6640905 +0000 GMT m=+51.590401101 (delta=1.922491832s)
	I0219 04:12:05.592278    8336 fix.go:191] guest clock delta is within tolerance: 1.922491832s
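The clock check above compares the guest's epoch time (the SSH command appears to be `date +%s.%N`, mangled in the log by Go's fmt into `%!s(MISSING).%!N(MISSING)`) against the host-side timestamp and accepts the 1.922491832s delta as within tolerance. A sketch reproducing that arithmetic with the values from this run (the 2s tolerance is an assumption for illustration, not a value taken from the log):

```shell
# Sketch: the guest-vs-host clock delta check, using this run's timestamps.
# guest = epoch reported by the VM; host = the Remote timestamp from the log.
guest=1676779925.586582332
host=1676779923.664090500
# awk handles the sub-second arithmetic; delta is the absolute difference.
delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "%.3f", d }')
# Accept the skew if it is under an assumed 2-second tolerance.
awk -v d="$delta" 'BEGIN { exit !(d < 2.0) }' && echo "within tolerance: ${delta}s"
```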
	I0219 04:12:05.592278    8336 start.go:83] releasing machines lock for "multinode-657900", held for 51.5508716s
	I0219 04:12:05.592278    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:12:06.329844    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:12:06.329844    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:06.329844    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:12:07.374115    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:12:07.374115    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:07.377910    8336 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0219 04:12:07.377910    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:12:07.385574    8336 ssh_runner.go:195] Run: cat /version.json
	I0219 04:12:07.385574    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:12:08.152858    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:12:08.152858    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:08.153030    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:12:08.153030    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:12:08.153030    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:08.153141    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:12:09.261339    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:12:09.261339    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:09.261647    8336 sshutil.go:53] new ssh client: &{IP:172.28.244.121 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\id_rsa Username:docker}
	I0219 04:12:09.282040    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:12:09.282066    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:09.282504    8336 sshutil.go:53] new ssh client: &{IP:172.28.244.121 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\id_rsa Username:docker}
	I0219 04:12:09.561695    8336 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0219 04:12:09.561695    8336 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.1837928s)
	I0219 04:12:09.561958    8336 command_runner.go:130] > {"iso_version": "v1.29.0-1676568791-15849", "kicbase_version": "v0.0.37-1675980448-15752", "minikube_version": "v1.29.0", "commit": "cf7ad99382c4b89a2ffa286b1101797332265ce3"}
	I0219 04:12:09.562113    8336 ssh_runner.go:235] Completed: cat /version.json: (2.1765466s)
	I0219 04:12:09.573632    8336 ssh_runner.go:195] Run: systemctl --version
	I0219 04:12:09.589010    8336 command_runner.go:130] > systemd 247 (247)
	I0219 04:12:09.589010    8336 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0219 04:12:09.598756    8336 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0219 04:12:09.605346    8336 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0219 04:12:09.606530    8336 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0219 04:12:09.616502    8336 ssh_runner.go:195] Run: which cri-dockerd
	I0219 04:12:09.621531    8336 command_runner.go:130] > /usr/bin/cri-dockerd
	I0219 04:12:09.631374    8336 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0219 04:12:09.649225    8336 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0219 04:12:09.699892    8336 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0219 04:12:09.724012    8336 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0219 04:12:09.724012    8336 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0219 04:12:09.724012    8336 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:12:09.732517    8336 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:12:09.764129    8336 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0219 04:12:09.764129    8336 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0219 04:12:09.764129    8336 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0219 04:12:09.764129    8336 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0219 04:12:09.764129    8336 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0219 04:12:09.764129    8336 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0219 04:12:09.764129    8336 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0219 04:12:09.764129    8336 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0219 04:12:09.764129    8336 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0219 04:12:09.764129    8336 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0219 04:12:09.764129    8336 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0219 04:12:09.764129    8336 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0219 04:12:09.764129    8336 docker.go:560] Images already preloaded, skipping extraction
	I0219 04:12:09.764129    8336 start.go:485] detecting cgroup driver to use...
	I0219 04:12:09.764761    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:12:09.795861    8336 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0219 04:12:09.795861    8336 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0219 04:12:09.805349    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0219 04:12:09.836126    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0219 04:12:09.852524    8336 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0219 04:12:09.862629    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0219 04:12:09.887713    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:12:09.916889    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0219 04:12:09.945554    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:12:09.973221    8336 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0219 04:12:10.003921    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
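The run of `sed` commands above rewrites `/etc/containerd/config.toml` to force the `cgroupfs` driver. The key edit flips `SystemdCgroup` to `false` while preserving indentation via the capture group; a sketch of that one substitution applied to a sample fragment instead of the real config:

```shell
# Sketch: the SystemdCgroup rewrite from the log, applied to a sample
# containerd config line rather than /etc/containerd/config.toml.
cfg='  SystemdCgroup = true'
out=$(printf '%s\n' "$cfg" | sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g')
echo "$out"
```

The `\1` back-reference keeps the original leading whitespace, so the edited line stays valid inside its TOML table.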
	I0219 04:12:10.032947    8336 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0219 04:12:10.047738    8336 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0219 04:12:10.058337    8336 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0219 04:12:10.076145    8336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:12:10.265884    8336 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0219 04:12:10.296317    8336 start.go:485] detecting cgroup driver to use...
	I0219 04:12:10.306285    8336 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0219 04:12:10.326993    8336 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0219 04:12:10.327048    8336 command_runner.go:130] > [Unit]
	I0219 04:12:10.327048    8336 command_runner.go:130] > Description=Docker Application Container Engine
	I0219 04:12:10.327119    8336 command_runner.go:130] > Documentation=https://docs.docker.com
	I0219 04:12:10.327119    8336 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0219 04:12:10.327186    8336 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0219 04:12:10.327186    8336 command_runner.go:130] > StartLimitBurst=3
	I0219 04:12:10.327246    8336 command_runner.go:130] > StartLimitIntervalSec=60
	I0219 04:12:10.327246    8336 command_runner.go:130] > [Service]
	I0219 04:12:10.327284    8336 command_runner.go:130] > Type=notify
	I0219 04:12:10.327306    8336 command_runner.go:130] > Restart=on-failure
	I0219 04:12:10.327350    8336 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0219 04:12:10.327350    8336 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0219 04:12:10.327415    8336 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0219 04:12:10.327415    8336 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0219 04:12:10.327474    8336 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0219 04:12:10.327530    8336 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0219 04:12:10.327530    8336 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0219 04:12:10.327590    8336 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0219 04:12:10.327590    8336 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0219 04:12:10.327646    8336 command_runner.go:130] > ExecStart=
	I0219 04:12:10.327706    8336 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0219 04:12:10.327763    8336 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0219 04:12:10.327763    8336 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0219 04:12:10.327763    8336 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0219 04:12:10.327763    8336 command_runner.go:130] > LimitNOFILE=infinity
	I0219 04:12:10.327824    8336 command_runner.go:130] > LimitNPROC=infinity
	I0219 04:12:10.327824    8336 command_runner.go:130] > LimitCORE=infinity
	I0219 04:12:10.327879    8336 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0219 04:12:10.327879    8336 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0219 04:12:10.327938    8336 command_runner.go:130] > TasksMax=infinity
	I0219 04:12:10.327938    8336 command_runner.go:130] > TimeoutStartSec=0
	I0219 04:12:10.327993    8336 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0219 04:12:10.327993    8336 command_runner.go:130] > Delegate=yes
	I0219 04:12:10.328051    8336 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0219 04:12:10.328051    8336 command_runner.go:130] > KillMode=process
	I0219 04:12:10.328154    8336 command_runner.go:130] > [Install]
	I0219 04:12:10.328154    8336 command_runner.go:130] > WantedBy=multi-user.target
	I0219 04:12:10.338463    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:12:10.368171    8336 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0219 04:12:10.409242    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:12:10.438675    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:12:10.475318    8336 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0219 04:12:10.539196    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:12:10.558668    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:12:10.588387    8336 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0219 04:12:10.588387    8336 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0219 04:12:10.599252    8336 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0219 04:12:10.777803    8336 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0219 04:12:10.945006    8336 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0219 04:12:10.945006    8336 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0219 04:12:10.984871    8336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:12:11.167628    8336 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:12:12.844165    8336 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6764596s)
	I0219 04:12:12.855816    8336 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:12:13.045724    8336 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0219 04:12:13.225478    8336 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:12:13.401339    8336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:12:13.581758    8336 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0219 04:12:13.608082    8336 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0219 04:12:13.618981    8336 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0219 04:12:13.627618    8336 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0219 04:12:13.627618    8336 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0219 04:12:13.627618    8336 command_runner.go:130] > Device: 16h/22d	Inode: 899         Links: 1
	I0219 04:12:13.627721    8336 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0219 04:12:13.627721    8336 command_runner.go:130] > Access: 2023-02-19 04:12:13.596443931 +0000
	I0219 04:12:13.627721    8336 command_runner.go:130] > Modify: 2023-02-19 04:12:13.596443931 +0000
	I0219 04:12:13.627721    8336 command_runner.go:130] > Change: 2023-02-19 04:12:13.599443579 +0000
	I0219 04:12:13.627721    8336 command_runner.go:130] >  Birth: -
	I0219 04:12:13.627721    8336 start.go:553] Will wait 60s for crictl version
	I0219 04:12:13.636114    8336 ssh_runner.go:195] Run: which crictl
	I0219 04:12:13.642186    8336 command_runner.go:130] > /usr/bin/crictl
	I0219 04:12:13.653501    8336 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0219 04:12:13.798390    8336 command_runner.go:130] > Version:  0.1.0
	I0219 04:12:13.798473    8336 command_runner.go:130] > RuntimeName:  docker
	I0219 04:12:13.798473    8336 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0219 04:12:13.798473    8336 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0219 04:12:13.798549    8336 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0219 04:12:13.806563    8336 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:12:13.847789    8336 command_runner.go:130] > 20.10.23
	I0219 04:12:13.855853    8336 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:12:13.893381    8336 command_runner.go:130] > 20.10.23
	I0219 04:12:13.898856    8336 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0219 04:12:13.899051    8336 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0219 04:12:13.905382    8336 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0219 04:12:13.905382    8336 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0219 04:12:13.905382    8336 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0219 04:12:13.905382    8336 ip.go:207] Found interface: {Index:11 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7f:a7:14 Flags:up|broadcast|multicast|running}
	I0219 04:12:13.907775    8336 ip.go:210] interface addr: fe80::8ff9:73c7:b894:c84f/64
	I0219 04:12:13.907775    8336 ip.go:210] interface addr: 172.28.240.1/20
	I0219 04:12:13.918375    8336 ssh_runner.go:195] Run: grep 172.28.240.1	host.minikube.internal$ /etc/hosts
	I0219 04:12:13.923718    8336 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:12:13.944434    8336 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:12:13.951464    8336 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:12:13.984533    8336 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0219 04:12:13.984533    8336 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0219 04:12:13.984533    8336 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0219 04:12:13.984533    8336 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0219 04:12:13.984533    8336 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0219 04:12:13.984533    8336 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0219 04:12:13.984533    8336 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0219 04:12:13.984533    8336 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0219 04:12:13.984533    8336 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0219 04:12:13.985202    8336 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0219 04:12:13.985202    8336 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0219 04:12:13.985241    8336 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0219 04:12:13.985319    8336 docker.go:560] Images already preloaded, skipping extraction
	I0219 04:12:13.993928    8336 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:12:14.028745    8336 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0219 04:12:14.028745    8336 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0219 04:12:14.028745    8336 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0219 04:12:14.028841    8336 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0219 04:12:14.028841    8336 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0219 04:12:14.028872    8336 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0219 04:12:14.028872    8336 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0219 04:12:14.028872    8336 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0219 04:12:14.028872    8336 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0219 04:12:14.028872    8336 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0219 04:12:14.028928    8336 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0219 04:12:14.028968    8336 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0219 04:12:14.029007    8336 cache_images.go:84] Images are preloaded, skipping loading
	I0219 04:12:14.037086    8336 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0219 04:12:14.084401    8336 command_runner.go:130] > cgroupfs
	I0219 04:12:14.084401    8336 cni.go:84] Creating CNI manager for ""
	I0219 04:12:14.084401    8336 cni.go:136] 3 nodes found, recommending kindnet
	I0219 04:12:14.084401    8336 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0219 04:12:14.084401    8336 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.244.121 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-657900 NodeName:multinode-657900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.244.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.244.121 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0219 04:12:14.085224    8336 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.244.121
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-657900"
	  kubeletExtraArgs:
	    node-ip: 172.28.244.121
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.244.121"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0219 04:12:14.085305    8336 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-657900 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.244.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-657900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0219 04:12:14.095858    8336 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0219 04:12:14.114248    8336 command_runner.go:130] > kubeadm
	I0219 04:12:14.114248    8336 command_runner.go:130] > kubectl
	I0219 04:12:14.114248    8336 command_runner.go:130] > kubelet
	I0219 04:12:14.114248    8336 binaries.go:44] Found k8s binaries, skipping transfer
	I0219 04:12:14.124525    8336 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0219 04:12:14.143919    8336 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0219 04:12:14.175354    8336 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0219 04:12:14.211698    8336 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0219 04:12:14.262941    8336 ssh_runner.go:195] Run: grep 172.28.244.121	control-plane.minikube.internal$ /etc/hosts
	I0219 04:12:14.269455    8336 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.244.121	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:12:14.289402    8336 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900 for IP: 172.28.244.121
	I0219 04:12:14.289402    8336 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:12:14.290241    8336 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0219 04:12:14.290971    8336 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0219 04:12:14.291908    8336 certs.go:311] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\client.key
	I0219 04:12:14.291995    8336 certs.go:315] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.key.8cd7bc6f
	I0219 04:12:14.292140    8336 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.crt.8cd7bc6f with IP's: [172.28.244.121 10.96.0.1 127.0.0.1 10.0.0.1]
	I0219 04:12:14.481519    8336 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.crt.8cd7bc6f ...
	I0219 04:12:14.481519    8336 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.crt.8cd7bc6f: {Name:mkdcf59a920d9ff5e3ef54591581db7bae0d6390 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:12:14.483676    8336 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.key.8cd7bc6f ...
	I0219 04:12:14.483676    8336 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.key.8cd7bc6f: {Name:mk360d0ac55b309fdb71206496de58cae494c661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:12:14.484051    8336 certs.go:333] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.crt.8cd7bc6f -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.crt
	I0219 04:12:14.492645    8336 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.key.8cd7bc6f -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.key
	I0219 04:12:14.493347    8336 certs.go:311] skipping aggregator signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\proxy-client.key
	I0219 04:12:14.493347    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0219 04:12:14.494452    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0219 04:12:14.494632    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0219 04:12:14.494632    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0219 04:12:14.495634    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0219 04:12:14.495795    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0219 04:12:14.496028    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0219 04:12:14.496028    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0219 04:12:14.496404    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem (1338 bytes)
	W0219 04:12:14.496404    8336 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148_empty.pem, impossibly tiny 0 bytes
	I0219 04:12:14.496980    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0219 04:12:14.497128    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0219 04:12:14.497419    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0219 04:12:14.497715    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0219 04:12:14.497975    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem (1708 bytes)
	I0219 04:12:14.497975    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem -> /usr/share/ca-certificates/10148.pem
	I0219 04:12:14.498715    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> /usr/share/ca-certificates/101482.pem
	I0219 04:12:14.498868    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:12:14.499042    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0219 04:12:14.543909    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0219 04:12:14.584412    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0219 04:12:14.625632    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0219 04:12:14.666207    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0219 04:12:14.710899    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0219 04:12:14.762269    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0219 04:12:14.808082    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0219 04:12:14.847784    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem --> /usr/share/ca-certificates/10148.pem (1338 bytes)
	I0219 04:12:14.888031    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /usr/share/ca-certificates/101482.pem (1708 bytes)
	I0219 04:12:14.929045    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0219 04:12:14.969761    8336 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0219 04:12:15.012986    8336 ssh_runner.go:195] Run: openssl version
	I0219 04:12:15.021770    8336 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0219 04:12:15.030017    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10148.pem && ln -fs /usr/share/ca-certificates/10148.pem /etc/ssl/certs/10148.pem"
	I0219 04:12:15.069657    8336 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10148.pem
	I0219 04:12:15.077288    8336 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:12:15.077288    8336 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:12:15.087779    8336 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10148.pem
	I0219 04:12:15.096063    8336 command_runner.go:130] > 51391683
	I0219 04:12:15.105236    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10148.pem /etc/ssl/certs/51391683.0"
	I0219 04:12:15.133879    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101482.pem && ln -fs /usr/share/ca-certificates/101482.pem /etc/ssl/certs/101482.pem"
	I0219 04:12:15.161643    8336 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101482.pem
	I0219 04:12:15.167554    8336 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:12:15.167639    8336 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:12:15.176833    8336 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101482.pem
	I0219 04:12:15.185179    8336 command_runner.go:130] > 3ec20f2e
	I0219 04:12:15.194778    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101482.pem /etc/ssl/certs/3ec20f2e.0"
	I0219 04:12:15.226176    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0219 04:12:15.254446    8336 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:12:15.266833    8336 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:12:15.267826    8336 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:12:15.275838    8336 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:12:15.282845    8336 command_runner.go:130] > b5213941
	I0219 04:12:15.290838    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0219 04:12:15.308537    8336 kubeadm.go:401] StartCluster: {Name:multinode-657900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-657900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.244.121 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.248.228 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.246.126 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:12:15.315823    8336 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0219 04:12:15.358818    8336 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0219 04:12:15.380829    8336 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0219 04:12:15.380829    8336 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0219 04:12:15.380829    8336 command_runner.go:130] > /var/lib/minikube/etcd:
	I0219 04:12:15.380829    8336 command_runner.go:130] > member
	I0219 04:12:15.380829    8336 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0219 04:12:15.380829    8336 kubeadm.go:633] restartCluster start
	I0219 04:12:15.388818    8336 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0219 04:12:15.403701    8336 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:12:15.404537    8336 kubeconfig.go:135] verify returned: extract IP: "multinode-657900" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:12:15.405005    8336 kubeconfig.go:146] "multinode-657900" context is missing from C:\Users\jenkins.minikube1\minikube-integration\kubeconfig - will repair!
	I0219 04:12:15.405684    8336 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:12:15.414624    8336 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:12:15.414624    8336 kapi.go:59] client config for multinode-657900: &rest.Config{Host:"https://172.28.244.121:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:12:15.416619    8336 cert_rotation.go:137] Starting client certificate rotation controller
	I0219 04:12:15.424620    8336 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0219 04:12:15.440973    8336 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0219 04:12:15.440973    8336 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0219 04:12:15.441050    8336 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0219 04:12:15.441050    8336 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0219 04:12:15.441050    8336 command_runner.go:130] >  kind: InitConfiguration
	I0219 04:12:15.441050    8336 command_runner.go:130] >  localAPIEndpoint:
	I0219 04:12:15.441106    8336 command_runner.go:130] > -  advertiseAddress: 172.28.246.233
	I0219 04:12:15.441161    8336 command_runner.go:130] > +  advertiseAddress: 172.28.244.121
	I0219 04:12:15.441161    8336 command_runner.go:130] >    bindPort: 8443
	I0219 04:12:15.441183    8336 command_runner.go:130] >  bootstrapTokens:
	I0219 04:12:15.441183    8336 command_runner.go:130] >    - groups:
	I0219 04:12:15.441210    8336 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0219 04:12:15.441210    8336 command_runner.go:130] >    criSocket: /var/run/cri-dockerd.sock
	I0219 04:12:15.441210    8336 command_runner.go:130] >    name: "multinode-657900"
	I0219 04:12:15.441210    8336 command_runner.go:130] >    kubeletExtraArgs:
	I0219 04:12:15.441210    8336 command_runner.go:130] > -    node-ip: 172.28.246.233
	I0219 04:12:15.441210    8336 command_runner.go:130] > +    node-ip: 172.28.244.121
	I0219 04:12:15.441210    8336 command_runner.go:130] >    taints: []
	I0219 04:12:15.441210    8336 command_runner.go:130] >  ---
	I0219 04:12:15.441210    8336 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0219 04:12:15.441210    8336 command_runner.go:130] >  kind: ClusterConfiguration
	I0219 04:12:15.441210    8336 command_runner.go:130] >  apiServer:
	I0219 04:12:15.441210    8336 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.28.246.233"]
	I0219 04:12:15.441210    8336 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.28.244.121"]
	I0219 04:12:15.441210    8336 command_runner.go:130] >    extraArgs:
	I0219 04:12:15.441210    8336 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0219 04:12:15.441210    8336 command_runner.go:130] >  controllerManager:
	I0219 04:12:15.441210    8336 kubeadm.go:599] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.28.246.233
	+  advertiseAddress: 172.28.244.121
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: /var/run/cri-dockerd.sock
	   name: "multinode-657900"
	   kubeletExtraArgs:
	-    node-ip: 172.28.246.233
	+    node-ip: 172.28.244.121
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.28.246.233"]
	+  certSANs: ["127.0.0.1", "localhost", "172.28.244.121"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0219 04:12:15.441210    8336 kubeadm.go:1120] stopping kube-system containers ...
	I0219 04:12:15.447825    8336 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0219 04:12:15.480658    8336 command_runner.go:130] > 0eb749d12a49
	I0219 04:12:15.480658    8336 command_runner.go:130] > b0d34c23d93e
	I0219 04:12:15.480658    8336 command_runner.go:130] > e4a0c5408369
	I0219 04:12:15.480658    8336 command_runner.go:130] > addeabc5c2e0
	I0219 04:12:15.480658    8336 command_runner.go:130] > 3cc329202fb1
	I0219 04:12:15.480658    8336 command_runner.go:130] > ca0d83d4d696
	I0219 04:12:15.480658    8336 command_runner.go:130] > 7c26aac822f9
	I0219 04:12:15.480658    8336 command_runner.go:130] > 35c5df6e4d7f
	I0219 04:12:15.480658    8336 command_runner.go:130] > 4c9cc5564cf4
	I0219 04:12:15.480658    8336 command_runner.go:130] > 2f34e1aaa1b5
	I0219 04:12:15.480658    8336 command_runner.go:130] > 105abb87f41f
	I0219 04:12:15.480658    8336 command_runner.go:130] > 55e12988bbae
	I0219 04:12:15.480658    8336 command_runner.go:130] > 0f7e494e0221
	I0219 04:12:15.480658    8336 command_runner.go:130] > 9cad608b4ab6
	I0219 04:12:15.480658    8336 command_runner.go:130] > a766f49230c1
	I0219 04:12:15.480658    8336 command_runner.go:130] > ff2f979f1ced
	I0219 04:12:15.480658    8336 docker.go:456] Stopping containers: [0eb749d12a49 b0d34c23d93e e4a0c5408369 addeabc5c2e0 3cc329202fb1 ca0d83d4d696 7c26aac822f9 35c5df6e4d7f 4c9cc5564cf4 2f34e1aaa1b5 105abb87f41f 55e12988bbae 0f7e494e0221 9cad608b4ab6 a766f49230c1 ff2f979f1ced]
	I0219 04:12:15.488463    8336 ssh_runner.go:195] Run: docker stop 0eb749d12a49 b0d34c23d93e e4a0c5408369 addeabc5c2e0 3cc329202fb1 ca0d83d4d696 7c26aac822f9 35c5df6e4d7f 4c9cc5564cf4 2f34e1aaa1b5 105abb87f41f 55e12988bbae 0f7e494e0221 9cad608b4ab6 a766f49230c1 ff2f979f1ced
	I0219 04:12:15.523336    8336 command_runner.go:130] > 0eb749d12a49
	I0219 04:12:15.523420    8336 command_runner.go:130] > b0d34c23d93e
	I0219 04:12:15.523420    8336 command_runner.go:130] > e4a0c5408369
	I0219 04:12:15.523420    8336 command_runner.go:130] > addeabc5c2e0
	I0219 04:12:15.523469    8336 command_runner.go:130] > 3cc329202fb1
	I0219 04:12:15.523469    8336 command_runner.go:130] > ca0d83d4d696
	I0219 04:12:15.523469    8336 command_runner.go:130] > 7c26aac822f9
	I0219 04:12:15.523469    8336 command_runner.go:130] > 35c5df6e4d7f
	I0219 04:12:15.523469    8336 command_runner.go:130] > 4c9cc5564cf4
	I0219 04:12:15.523469    8336 command_runner.go:130] > 2f34e1aaa1b5
	I0219 04:12:15.523469    8336 command_runner.go:130] > 105abb87f41f
	I0219 04:12:15.523469    8336 command_runner.go:130] > 55e12988bbae
	I0219 04:12:15.523469    8336 command_runner.go:130] > 0f7e494e0221
	I0219 04:12:15.523469    8336 command_runner.go:130] > 9cad608b4ab6
	I0219 04:12:15.523469    8336 command_runner.go:130] > a766f49230c1
	I0219 04:12:15.523469    8336 command_runner.go:130] > ff2f979f1ced
	I0219 04:12:15.532171    8336 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0219 04:12:15.565179    8336 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0219 04:12:15.579561    8336 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0219 04:12:15.579561    8336 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0219 04:12:15.579561    8336 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0219 04:12:15.579561    8336 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0219 04:12:15.579663    8336 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0219 04:12:15.588239    8336 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0219 04:12:15.605011    8336 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0219 04:12:15.605077    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:12:15.975920    8336 command_runner.go:130] ! W0219 04:12:15.969124    1388 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:12:15.995980    8336 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0219 04:12:15.995980    8336 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0219 04:12:15.996032    8336 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0219 04:12:15.996032    8336 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0219 04:12:15.996032    8336 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0219 04:12:15.996071    8336 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0219 04:12:15.996130    8336 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0219 04:12:15.996185    8336 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0219 04:12:15.996185    8336 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0219 04:12:15.996185    8336 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0219 04:12:15.996264    8336 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0219 04:12:15.996264    8336 command_runner.go:130] > [certs] Using the existing "sa" key
	I0219 04:12:15.996348    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:12:16.068100    8336 command_runner.go:130] ! W0219 04:12:16.061261    1394 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:12:17.500584    8336 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0219 04:12:17.500679    8336 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0219 04:12:17.500679    8336 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0219 04:12:17.500679    8336 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0219 04:12:17.500734    8336 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0219 04:12:17.500764    8336 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.5044209s)
	I0219 04:12:17.500826    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:12:17.579640    8336 command_runner.go:130] ! W0219 04:12:17.572493    1400 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:12:17.797554    8336 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0219 04:12:17.797554    8336 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0219 04:12:17.797554    8336 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0219 04:12:17.797554    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:12:17.901692    8336 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0219 04:12:17.901692    8336 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0219 04:12:17.908697    8336 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0219 04:12:17.910705    8336 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0219 04:12:17.915764    8336 command_runner.go:130] ! W0219 04:12:17.883039    1422 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:12:17.915821    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:12:17.993029    8336 command_runner.go:130] ! W0219 04:12:17.985919    1428 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:12:18.017022    8336 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0219 04:12:18.017022    8336 api_server.go:51] waiting for apiserver process to appear ...
	I0219 04:12:18.026027    8336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:12:18.569942    8336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:12:19.063609    8336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:12:19.571696    8336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:12:20.068203    8336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:12:20.557411    8336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:12:21.066175    8336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:12:21.558440    8336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:12:22.067611    8336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:12:22.090147    8336 command_runner.go:130] > 1827
	I0219 04:12:22.090147    8336 api_server.go:71] duration metric: took 4.0731382s to wait for apiserver process to appear ...
	I0219 04:12:22.090147    8336 api_server.go:87] waiting for apiserver healthz status ...
	I0219 04:12:22.090147    8336 api_server.go:252] Checking apiserver healthz at https://172.28.244.121:8443/healthz ...
	I0219 04:12:27.105983    8336 api_server.go:268] stopped: https://172.28.244.121:8443/healthz: Get "https://172.28.244.121:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0219 04:12:27.608201    8336 api_server.go:252] Checking apiserver healthz at https://172.28.244.121:8443/healthz ...
	I0219 04:12:27.617913    8336 api_server.go:278] https://172.28.244.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0219 04:12:27.618023    8336 api_server.go:102] status: https://172.28.244.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:12:28.116796    8336 api_server.go:252] Checking apiserver healthz at https://172.28.244.121:8443/healthz ...
	I0219 04:12:28.125363    8336 api_server.go:278] https://172.28.244.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0219 04:12:28.125417    8336 api_server.go:102] status: https://172.28.244.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:12:28.608573    8336 api_server.go:252] Checking apiserver healthz at https://172.28.244.121:8443/healthz ...
	I0219 04:12:28.618031    8336 api_server.go:278] https://172.28.244.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0219 04:12:28.618031    8336 api_server.go:102] status: https://172.28.244.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:12:29.119474    8336 api_server.go:252] Checking apiserver healthz at https://172.28.244.121:8443/healthz ...
	I0219 04:12:29.129152    8336 api_server.go:278] https://172.28.244.121:8443/healthz returned 200:
	ok
	I0219 04:12:29.129642    8336 round_trippers.go:463] GET https://172.28.244.121:8443/version
	I0219 04:12:29.129642    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:29.129642    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:29.129642    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:29.146509    8336 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0219 04:12:29.146509    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:29.146912    8336 round_trippers.go:580]     Content-Length: 263
	I0219 04:12:29.146912    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:29 GMT
	I0219 04:12:29.146912    8336 round_trippers.go:580]     Audit-Id: 90389462-cf40-46bf-babd-2f96e2200db3
	I0219 04:12:29.146912    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:29.147001    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:29.147001    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:29.147001    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:29.147075    8336 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
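The `/version` payload above is decoded to extract `gitVersion`, which produces the "control plane version" line that follows. A trimmed sketch of that decoding (field set reduced for illustration; the real client uses the full Kubernetes version-info type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors the fields of the /version response in the log.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

// parseVersion extracts the control plane version string from the body.
func parseVersion(body []byte) (string, error) {
	var v versionInfo
	if err := json.Unmarshal(body, &v); err != nil {
		return "", err
	}
	return v.GitVersion, nil
}

func main() {
	body := []byte(`{"major":"1","minor":"26","gitVersion":"v1.26.1","platform":"linux/amd64"}`)
	ver, _ := parseVersion(body)
	fmt.Println(ver) // v1.26.1
}
```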
	I0219 04:12:29.147252    8336 api_server.go:140] control plane version: v1.26.1
	I0219 04:12:29.147325    8336 api_server.go:130] duration metric: took 7.0572025s to wait for apiserver health ...
	I0219 04:12:29.147325    8336 cni.go:84] Creating CNI manager for ""
	I0219 04:12:29.147325    8336 cni.go:136] 3 nodes found, recommending kindnet
	I0219 04:12:29.150034    8336 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0219 04:12:29.161407    8336 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0219 04:12:29.170431    8336 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0219 04:12:29.170431    8336 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0219 04:12:29.170431    8336 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0219 04:12:29.170431    8336 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0219 04:12:29.170431    8336 command_runner.go:130] > Access: 2023-02-19 04:11:42.350359200 +0000
	I0219 04:12:29.170431    8336 command_runner.go:130] > Modify: 2023-02-16 22:59:55.000000000 +0000
	I0219 04:12:29.170431    8336 command_runner.go:130] > Change: 2023-02-19 04:11:32.681000000 +0000
	I0219 04:12:29.170431    8336 command_runner.go:130] >  Birth: -
	I0219 04:12:29.171404    8336 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0219 04:12:29.171404    8336 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0219 04:12:29.232140    8336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0219 04:12:31.044422    8336 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0219 04:12:31.044422    8336 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0219 04:12:31.044422    8336 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0219 04:12:31.044422    8336 command_runner.go:130] > daemonset.apps/kindnet configured
	I0219 04:12:31.044422    8336 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.8122885s)
	I0219 04:12:31.044422    8336 system_pods.go:43] waiting for kube-system pods to appear ...
	I0219 04:12:31.044422    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods
	I0219 04:12:31.044422    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:31.044422    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:31.044422    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:31.051014    8336 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0219 04:12:31.051451    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:31.051451    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:31.051451    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:31.051451    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:31.051451    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:31 GMT
	I0219 04:12:31.051451    8336 round_trippers.go:580]     Audit-Id: f0e94098-ca92-49c7-857d-70b0bcedb3cd
	I0219 04:12:31.051451    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:31.053039    8336 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1172"},"items":[{"metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1148","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 81722 chars]
	I0219 04:12:31.061209    8336 system_pods.go:59] 12 kube-system pods found
	I0219 04:12:31.062108    8336 system_pods.go:61] "coredns-787d4945fb-9mvfg" [38bce706-085e-44e0-bf5e-97cbdebb682e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0219 04:12:31.062108    8336 system_pods.go:61] "etcd-multinode-657900" [e77b4ae1-9bb6-48e7-a39d-b91eaa2fbe32] Pending
	I0219 04:12:31.062108    8336 system_pods.go:61] "kindnet-fp2c9" [fabe9c73-4899-458b-b4ed-16d65d69e5d9] Running
	I0219 04:12:31.062108    8336 system_pods.go:61] "kindnet-lvjng" [df7a9269-516f-4b66-af0f-429b21ee31cc] Running
	I0219 04:12:31.062108    8336 system_pods.go:61] "kindnet-zvk4x" [de4adab4-766a-4c34-b827-9bedc6468779] Running
	I0219 04:12:31.062108    8336 system_pods.go:61] "kube-apiserver-multinode-657900" [e47db067-f2ff-412b-954f-0b6b6cf42f8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0219 04:12:31.062108    8336 system_pods.go:61] "kube-controller-manager-multinode-657900" [463b901e-dd04-46fc-91a3-9917b12590ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0219 04:12:31.062108    8336 system_pods.go:61] "kube-proxy-8h9z4" [5ff10d29-0b2a-4046-a946-90b1a4d8bcb7] Running
	I0219 04:12:31.062108    8336 system_pods.go:61] "kube-proxy-kcm8m" [8ce14b4f-6df3-4822-ac2b-06f3417e8eaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0219 04:12:31.062108    8336 system_pods.go:61] "kube-proxy-n5vsl" [8757301c-e7d4-4784-8e1b-8e1f24aeabcd] Running
	I0219 04:12:31.062108    8336 system_pods.go:61] "kube-scheduler-multinode-657900" [ba38eff9-ab82-463a-bb6f-8af5e4599f15] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0219 04:12:31.062108    8336 system_pods.go:61] "storage-provisioner" [4fcb063a-be6a-41e8-9379-c8f7cf16a165] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0219 04:12:31.062108    8336 system_pods.go:74] duration metric: took 17.6857ms to wait for pod list to return data ...
	I0219 04:12:31.062108    8336 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:12:31.062108    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes
	I0219 04:12:31.062108    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:31.062108    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:31.062108    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:31.066709    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:31.066709    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:31.066709    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:31.066709    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:31.066709    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:31.066709    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:31 GMT
	I0219 04:12:31.066709    8336 round_trippers.go:580]     Audit-Id: d8dd8375-212f-4d97-b379-24141be2e825
	I0219 04:12:31.067701    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:31.069052    8336 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1172"},"items":[{"metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16403 chars]
	I0219 04:12:31.070497    8336 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:12:31.070497    8336 node_conditions.go:123] node cpu capacity is 2
	I0219 04:12:31.070497    8336 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:12:31.070497    8336 node_conditions.go:123] node cpu capacity is 2
	I0219 04:12:31.070615    8336 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:12:31.070615    8336 node_conditions.go:123] node cpu capacity is 2
	I0219 04:12:31.070615    8336 node_conditions.go:105] duration metric: took 8.5073ms to run NodePressure ...
	I0219 04:12:31.070672    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:12:31.176752    8336 command_runner.go:130] ! W0219 04:12:31.168951    2736 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:12:31.471697    8336 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0219 04:12:31.471697    8336 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0219 04:12:31.473720    8336 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0219 04:12:31.473720    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0219 04:12:31.473720    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:31.473720    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:31.473720    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:31.478702    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:31.478702    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:31.478908    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:31 GMT
	I0219 04:12:31.478908    8336 round_trippers.go:580]     Audit-Id: 552fe691-fd87-4321-871a-3c97a27d988d
	I0219 04:12:31.478908    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:31.478908    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:31.478908    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:31.478908    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:31.479994    8336 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1177"},"items":[{"metadata":{"name":"etcd-multinode-657900","namespace":"kube-system","uid":"e77b4ae1-9bb6-48e7-a39d-b91eaa2fbe32","resourceVersion":"1177","creationTimestamp":"2023-02-19T04:12:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.244.121:2379","kubernetes.io/config.hash":"cf2e032f8176f837f5bcf073190e4313","kubernetes.io/config.mirror":"cf2e032f8176f837f5bcf073190e4313","kubernetes.io/config.seen":"2023-02-19T04:12:18.622144946Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:12:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 29313 chars]
	I0219 04:12:31.479994    8336 kubeadm.go:784] kubelet initialised
	I0219 04:12:31.479994    8336 kubeadm.go:785] duration metric: took 6.2742ms waiting for restarted kubelet to initialise ...
	I0219 04:12:31.479994    8336 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:12:31.479994    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods
	I0219 04:12:31.479994    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:31.479994    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:31.479994    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:31.487713    8336 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:12:31.487713    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:31.487713    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:31.487713    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:31.487713    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:31.488738    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:31 GMT
	I0219 04:12:31.488738    8336 round_trippers.go:580]     Audit-Id: 62a04a1e-4b27-4a9a-8223-b9a5b665a03b
	I0219 04:12:31.488738    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:31.489703    8336 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1177"},"items":[{"metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1148","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83696 chars]
	I0219 04:12:31.493707    8336 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:31.493707    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:12:31.493707    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:31.493707    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:31.493707    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:31.496707    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:12:31.496707    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:31.496707    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:31.496707    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:31.496707    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:31.496707    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:31 GMT
	I0219 04:12:31.496707    8336 round_trippers.go:580]     Audit-Id: e5519610-0836-4f69-8cdc-aead9784b2e6
	I0219 04:12:31.496707    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:31.496947    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1148","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0219 04:12:31.497594    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:31.497662    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:31.497662    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:31.497662    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:31.500688    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:31.500766    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:31.500766    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:31.500766    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:31.500813    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:31.500813    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:31 GMT
	I0219 04:12:31.500813    8336 round_trippers.go:580]     Audit-Id: 6e7a4cbe-1635-4017-ba8b-8684679cd462
	I0219 04:12:31.500848    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:31.500848    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:31.501548    8336 pod_ready.go:97] node "multinode-657900" hosting pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-657900" has status "Ready":"False"
	I0219 04:12:31.501655    8336 pod_ready.go:81] duration metric: took 7.9478ms waiting for pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace to be "Ready" ...
	E0219 04:12:31.501655    8336 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-657900" hosting pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-657900" has status "Ready":"False"
	I0219 04:12:31.501655    8336 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:31.501751    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-657900
	I0219 04:12:31.501751    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:31.501822    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:31.501822    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:31.508931    8336 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:12:31.508931    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:31.508931    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:31 GMT
	I0219 04:12:31.508931    8336 round_trippers.go:580]     Audit-Id: 065f6df2-0d05-4952-8bce-0ecb4763860d
	I0219 04:12:31.508931    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:31.508931    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:31.508931    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:31.508931    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:31.508931    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-657900","namespace":"kube-system","uid":"e77b4ae1-9bb6-48e7-a39d-b91eaa2fbe32","resourceVersion":"1177","creationTimestamp":"2023-02-19T04:12:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.244.121:2379","kubernetes.io/config.hash":"cf2e032f8176f837f5bcf073190e4313","kubernetes.io/config.mirror":"cf2e032f8176f837f5bcf073190e4313","kubernetes.io/config.seen":"2023-02-19T04:12:18.622144946Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:12:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6081 chars]
	I0219 04:12:31.508931    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:31.508931    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:31.508931    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:31.508931    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:31.514931    8336 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0219 04:12:31.514931    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:31.514931    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:31.514931    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:31.514931    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:31 GMT
	I0219 04:12:31.514931    8336 round_trippers.go:580]     Audit-Id: 8761bd26-e81a-4515-b5a3-5263190d4a3c
	I0219 04:12:31.514931    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:31.514931    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:31.514931    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:31.514931    8336 pod_ready.go:97] node "multinode-657900" hosting pod "etcd-multinode-657900" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-657900" has status "Ready":"False"
	I0219 04:12:31.515943    8336 pod_ready.go:81] duration metric: took 14.2884ms waiting for pod "etcd-multinode-657900" in "kube-system" namespace to be "Ready" ...
	E0219 04:12:31.515943    8336 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-657900" hosting pod "etcd-multinode-657900" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-657900" has status "Ready":"False"
	I0219 04:12:31.515943    8336 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:31.515943    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-657900
	I0219 04:12:31.515943    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:31.515943    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:31.515943    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:31.518920    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:12:31.519005    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:31.519005    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:31 GMT
	I0219 04:12:31.519005    8336 round_trippers.go:580]     Audit-Id: f7a1fde8-c000-4d0e-b132-c6d43cfd0d84
	I0219 04:12:31.519005    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:31.519005    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:31.519092    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:31.519092    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:31.519361    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-657900","namespace":"kube-system","uid":"e47db067-f2ff-412b-954f-0b6b6cf42f8b","resourceVersion":"1165","creationTimestamp":"2023-02-19T04:12:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.244.121:8443","kubernetes.io/config.hash":"64d9d1395b6e25aebebbf4adfc03e069","kubernetes.io/config.mirror":"64d9d1395b6e25aebebbf4adfc03e069","kubernetes.io/config.seen":"2023-02-19T04:12:18.621131732Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:12:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7637 chars]
	I0219 04:12:31.520122    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:31.520122    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:31.520122    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:31.520122    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:31.522726    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:12:31.522992    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:31.522992    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:31.522992    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:31.522992    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:31.522992    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:31 GMT
	I0219 04:12:31.523062    8336 round_trippers.go:580]     Audit-Id: eaaaeafe-84f0-43ee-84a4-4aa5d663745d
	I0219 04:12:31.523062    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:31.523308    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:31.523699    8336 pod_ready.go:97] node "multinode-657900" hosting pod "kube-apiserver-multinode-657900" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-657900" has status "Ready":"False"
	I0219 04:12:31.523765    8336 pod_ready.go:81] duration metric: took 7.822ms waiting for pod "kube-apiserver-multinode-657900" in "kube-system" namespace to be "Ready" ...
	E0219 04:12:31.523765    8336 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-657900" hosting pod "kube-apiserver-multinode-657900" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-657900" has status "Ready":"False"
	I0219 04:12:31.523765    8336 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:31.523899    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-657900
	I0219 04:12:31.523899    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:31.523899    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:31.523899    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:31.528846    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:31.528846    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:31.528846    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:31.528846    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:31.528846    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:31 GMT
	I0219 04:12:31.528846    8336 round_trippers.go:580]     Audit-Id: aa7a2bfa-21f9-4711-aada-ac40945a403d
	I0219 04:12:31.528846    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:31.529396    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:31.529692    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-657900","namespace":"kube-system","uid":"463b901e-dd04-46fc-91a3-9917b12590ff","resourceVersion":"1162","creationTimestamp":"2023-02-19T04:00:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cd5ea91854c20d0b081e1be96fa370f","kubernetes.io/config.mirror":"7cd5ea91854c20d0b081e1be96fa370f","kubernetes.io/config.seen":"2023-02-19T04:00:19.445306645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7425 chars]
	I0219 04:12:31.530300    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:31.530300    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:31.530300    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:31.530300    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:31.532476    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:12:31.533500    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:31.533500    8336 round_trippers.go:580]     Audit-Id: 7ab517e9-c3b1-4c2e-a327-a801aa078c6a
	I0219 04:12:31.533500    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:31.533500    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:31.533500    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:31.533500    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:31.533500    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:31 GMT
	I0219 04:12:31.533500    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:31.533500    8336 pod_ready.go:97] node "multinode-657900" hosting pod "kube-controller-manager-multinode-657900" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-657900" has status "Ready":"False"
	I0219 04:12:31.533500    8336 pod_ready.go:81] duration metric: took 9.6682ms waiting for pod "kube-controller-manager-multinode-657900" in "kube-system" namespace to be "Ready" ...
	E0219 04:12:31.533500    8336 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-657900" hosting pod "kube-controller-manager-multinode-657900" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-657900" has status "Ready":"False"
	I0219 04:12:31.533500    8336 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8h9z4" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:31.688610    8336 request.go:622] Waited for 154.8157ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h9z4
	I0219 04:12:31.688769    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h9z4
	I0219 04:12:31.688769    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:31.688769    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:31.688769    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:31.696664    8336 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:12:31.696664    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:31.696664    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:31.696664    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:31 GMT
	I0219 04:12:31.696664    8336 round_trippers.go:580]     Audit-Id: d40dcb13-fb84-4b4e-a694-6639cfcfd48b
	I0219 04:12:31.696664    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:31.696664    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:31.696664    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:31.697222    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h9z4","generateName":"kube-proxy-","namespace":"kube-system","uid":"5ff10d29-0b2a-4046-a946-90b1a4d8bcb7","resourceVersion":"541","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"86ae75b5-707b-4d98-a30e-e970d37cba85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86ae75b5-707b-4d98-a30e-e970d37cba85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I0219 04:12:31.876277    8336 request.go:622] Waited for 178.742ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:12:31.876578    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:12:31.876578    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:31.876578    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:31.876578    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:31.881347    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:31.881843    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:31.881843    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:31.881843    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:31.881843    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:31 GMT
	I0219 04:12:31.881843    8336 round_trippers.go:580]     Audit-Id: 746c3297-04a7-40e0-a4aa-5df75846a8ee
	I0219 04:12:31.881843    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:31.881843    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:31.881843    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"946","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4513 chars]
	I0219 04:12:31.882416    8336 pod_ready.go:92] pod "kube-proxy-8h9z4" in "kube-system" namespace has status "Ready":"True"
	I0219 04:12:31.882416    8336 pod_ready.go:81] duration metric: took 348.9173ms waiting for pod "kube-proxy-8h9z4" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:31.882565    8336 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kcm8m" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:32.079819    8336 request.go:622] Waited for 197.182ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kcm8m
	I0219 04:12:32.080079    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kcm8m
	I0219 04:12:32.080079    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:32.080079    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:32.080079    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:32.084572    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:32.084572    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:32.084572    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:32.084572    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:32.084666    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:32 GMT
	I0219 04:12:32.084666    8336 round_trippers.go:580]     Audit-Id: b2df4eff-a969-424d-92db-3f6b2de4c1fd
	I0219 04:12:32.084688    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:32.084688    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:32.084869    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kcm8m","generateName":"kube-proxy-","namespace":"kube-system","uid":"8ce14b4f-6df3-4822-ac2b-06f3417e8eaa","resourceVersion":"1159","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"86ae75b5-707b-4d98-a30e-e970d37cba85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86ae75b5-707b-4d98-a30e-e970d37cba85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5933 chars]
	I0219 04:12:32.284990    8336 request.go:622] Waited for 199.1002ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:32.285358    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:32.285358    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:32.285462    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:32.285462    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:32.293165    8336 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:12:32.293428    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:32.293428    8336 round_trippers.go:580]     Audit-Id: 082ef5a8-6ec2-4384-9b80-7c9d2309b4db
	I0219 04:12:32.293428    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:32.293428    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:32.293428    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:32.293517    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:32.293517    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:32 GMT
	I0219 04:12:32.293816    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:32.294410    8336 pod_ready.go:97] node "multinode-657900" hosting pod "kube-proxy-kcm8m" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-657900" has status "Ready":"False"
	I0219 04:12:32.294410    8336 pod_ready.go:81] duration metric: took 411.8469ms waiting for pod "kube-proxy-kcm8m" in "kube-system" namespace to be "Ready" ...
	E0219 04:12:32.294410    8336 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-657900" hosting pod "kube-proxy-kcm8m" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-657900" has status "Ready":"False"
	I0219 04:12:32.294410    8336 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-n5vsl" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:32.474030    8336 request.go:622] Waited for 179.3182ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n5vsl
	I0219 04:12:32.474296    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n5vsl
	I0219 04:12:32.474296    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:32.474296    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:32.474296    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:32.477720    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:32.477720    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:32.477720    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:32.477720    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:32.477953    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:32 GMT
	I0219 04:12:32.477953    8336 round_trippers.go:580]     Audit-Id: 452e00f6-1319-4621-9fd4-bf3ddf043e80
	I0219 04:12:32.477953    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:32.477953    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:32.478224    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-n5vsl","generateName":"kube-proxy-","namespace":"kube-system","uid":"8757301c-e7d4-4784-8e1b-8e1f24aeabcd","resourceVersion":"1090","creationTimestamp":"2023-02-19T04:05:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"86ae75b5-707b-4d98-a30e-e970d37cba85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:05:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86ae75b5-707b-4d98-a30e-e970d37cba85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5751 chars]
	I0219 04:12:32.682174    8336 request.go:622] Waited for 203.1728ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:12:32.682174    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:12:32.682174    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:32.682174    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:32.682174    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:32.685861    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:32.685861    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:32.685861    8336 round_trippers.go:580]     Audit-Id: e052809c-e906-491a-8064-45b38a002b12
	I0219 04:12:32.685861    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:32.685861    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:32.686754    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:32.686754    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:32.686754    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:32 GMT
	I0219 04:12:32.686963    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"5442a324-219c-450a-bc84-42446fe87d39","resourceVersion":"1103","creationTimestamp":"2023-02-19T04:09:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:09:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4331 chars]
	I0219 04:12:32.687537    8336 pod_ready.go:92] pod "kube-proxy-n5vsl" in "kube-system" namespace has status "Ready":"True"
	I0219 04:12:32.687537    8336 pod_ready.go:81] duration metric: took 393.0636ms waiting for pod "kube-proxy-n5vsl" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:32.687604    8336 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:32.885208    8336 request.go:622] Waited for 197.3791ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-657900
	I0219 04:12:32.885374    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-657900
	I0219 04:12:32.885374    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:32.885374    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:32.885374    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:32.888075    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:12:32.888075    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:32.888075    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:32 GMT
	I0219 04:12:32.888075    8336 round_trippers.go:580]     Audit-Id: f48b7f1e-10d6-4f2d-9dd2-1dbfccd4fa09
	I0219 04:12:32.888075    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:32.888075    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:32.888460    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:32.888460    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:32.889105    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-657900","namespace":"kube-system","uid":"ba38eff9-ab82-463a-bb6f-8af5e4599f15","resourceVersion":"1172","creationTimestamp":"2023-02-19T04:00:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d67ab919dfafdb0eecec781e708349ff","kubernetes.io/config.mirror":"d67ab919dfafdb0eecec781e708349ff","kubernetes.io/config.seen":"2023-02-19T04:00:19.445308045Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5137 chars]
	I0219 04:12:33.075491    8336 request.go:622] Waited for 186.1931ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:33.075587    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:33.075587    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:33.075587    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:33.075587    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:33.079902    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:33.079902    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:33.080144    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:33.080144    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:33.080144    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:33.080144    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:33 GMT
	I0219 04:12:33.080225    8336 round_trippers.go:580]     Audit-Id: b2e96e00-d78a-4f45-93d3-3e28ba21dd2f
	I0219 04:12:33.080249    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:33.080495    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:33.081171    8336 pod_ready.go:97] node "multinode-657900" hosting pod "kube-scheduler-multinode-657900" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-657900" has status "Ready":"False"
	I0219 04:12:33.081256    8336 pod_ready.go:81] duration metric: took 393.6537ms waiting for pod "kube-scheduler-multinode-657900" in "kube-system" namespace to be "Ready" ...
	E0219 04:12:33.081256    8336 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-657900" hosting pod "kube-scheduler-multinode-657900" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-657900" has status "Ready":"False"
	I0219 04:12:33.081256    8336 pod_ready.go:38] duration metric: took 1.601267s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:12:33.081344    8336 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0219 04:12:33.101226    8336 command_runner.go:130] > -16
	I0219 04:12:33.101375    8336 ops.go:34] apiserver oom_adj: -16
	I0219 04:12:33.101375    8336 kubeadm.go:637] restartCluster took 17.7206059s
	I0219 04:12:33.101375    8336 kubeadm.go:403] StartCluster complete in 17.7928985s
	I0219 04:12:33.101375    8336 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:12:33.101653    8336 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:12:33.103709    8336 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:12:33.105113    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0219 04:12:33.105113    8336 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0219 04:12:33.105744    8336 config.go:182] Loaded profile config "multinode-657900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:12:33.109982    8336 out.go:177] * Enabled addons: 
	I0219 04:12:33.112087    8336 addons.go:492] enable addons completed in 6.974ms: enabled=[]
	I0219 04:12:33.114446    8336 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:12:33.114954    8336 kapi.go:59] client config for multinode-657900: &rest.Config{Host:"https://172.28.244.121:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:12:33.116540    8336 cert_rotation.go:137] Starting client certificate rotation controller
	I0219 04:12:33.117197    8336 round_trippers.go:463] GET https://172.28.244.121:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0219 04:12:33.117197    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:33.117197    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:33.117197    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:33.130181    8336 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0219 04:12:33.130181    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:33.130181    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:33.130181    8336 round_trippers.go:580]     Content-Length: 292
	I0219 04:12:33.130181    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:33 GMT
	I0219 04:12:33.130181    8336 round_trippers.go:580]     Audit-Id: 9f9055b3-e8ad-483f-97e1-fc64f6f6c8fd
	I0219 04:12:33.130181    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:33.130181    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:33.130181    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:33.131457    8336 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15caddfb-a629-49c9-8b4b-8cd8e13b08e2","resourceVersion":"1174","creationTimestamp":"2023-02-19T04:00:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0219 04:12:33.131741    8336 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-657900" context rescaled to 1 replicas
	I0219 04:12:33.131813    8336 start.go:223] Will wait 6m0s for node &{Name: IP:172.28.244.121 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:12:33.135206    8336 out.go:177] * Verifying Kubernetes components...
	I0219 04:12:33.147584    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:12:33.273423    8336 command_runner.go:130] > apiVersion: v1
	I0219 04:12:33.273497    8336 command_runner.go:130] > data:
	I0219 04:12:33.273497    8336 command_runner.go:130] >   Corefile: |
	I0219 04:12:33.273497    8336 command_runner.go:130] >     .:53 {
	I0219 04:12:33.273548    8336 command_runner.go:130] >         log
	I0219 04:12:33.273548    8336 command_runner.go:130] >         errors
	I0219 04:12:33.273548    8336 command_runner.go:130] >         health {
	I0219 04:12:33.273578    8336 command_runner.go:130] >            lameduck 5s
	I0219 04:12:33.273578    8336 command_runner.go:130] >         }
	I0219 04:12:33.273578    8336 command_runner.go:130] >         ready
	I0219 04:12:33.273578    8336 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0219 04:12:33.273578    8336 command_runner.go:130] >            pods insecure
	I0219 04:12:33.273627    8336 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0219 04:12:33.273656    8336 command_runner.go:130] >            ttl 30
	I0219 04:12:33.273656    8336 command_runner.go:130] >         }
	I0219 04:12:33.273656    8336 command_runner.go:130] >         prometheus :9153
	I0219 04:12:33.273684    8336 command_runner.go:130] >         hosts {
	I0219 04:12:33.273684    8336 command_runner.go:130] >            172.28.240.1 host.minikube.internal
	I0219 04:12:33.273684    8336 command_runner.go:130] >            fallthrough
	I0219 04:12:33.273684    8336 command_runner.go:130] >         }
	I0219 04:12:33.273684    8336 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0219 04:12:33.273684    8336 command_runner.go:130] >            max_concurrent 1000
	I0219 04:12:33.273684    8336 command_runner.go:130] >         }
	I0219 04:12:33.273684    8336 command_runner.go:130] >         cache 30
	I0219 04:12:33.273684    8336 command_runner.go:130] >         loop
	I0219 04:12:33.273684    8336 command_runner.go:130] >         reload
	I0219 04:12:33.273684    8336 command_runner.go:130] >         loadbalance
	I0219 04:12:33.273684    8336 command_runner.go:130] >     }
	I0219 04:12:33.273684    8336 command_runner.go:130] > kind: ConfigMap
	I0219 04:12:33.273684    8336 command_runner.go:130] > metadata:
	I0219 04:12:33.273684    8336 command_runner.go:130] >   creationTimestamp: "2023-02-19T04:00:19Z"
	I0219 04:12:33.273684    8336 command_runner.go:130] >   name: coredns
	I0219 04:12:33.273684    8336 command_runner.go:130] >   namespace: kube-system
	I0219 04:12:33.273684    8336 command_runner.go:130] >   resourceVersion: "366"
	I0219 04:12:33.273684    8336 command_runner.go:130] >   uid: 25821aee-fb16-415b-ac4e-9df69cd5c6ad
	I0219 04:12:33.273684    8336 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0219 04:12:33.273684    8336 node_ready.go:35] waiting up to 6m0s for node "multinode-657900" to be "Ready" ...
	I0219 04:12:33.279314    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:33.279343    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:33.279343    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:33.279343    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:33.282188    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:12:33.282188    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:33.282188    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:33.282188    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:33.282188    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:33.282188    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:33 GMT
	I0219 04:12:33.282188    8336 round_trippers.go:580]     Audit-Id: e1b438c9-c6a3-4fa1-95de-ad4d9467e4e0
	I0219 04:12:33.282188    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:33.282188    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:33.785209    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:33.785209    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:33.785209    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:33.785209    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:33.795801    8336 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0219 04:12:33.795801    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:33.795801    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:33.795801    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:33.796103    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:33 GMT
	I0219 04:12:33.796103    8336 round_trippers.go:580]     Audit-Id: c3f2d54f-42b5-48d3-b695-f9c249fddb7e
	I0219 04:12:33.796103    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:33.796103    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:33.796350    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:34.288964    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:34.289065    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:34.289065    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:34.289119    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:34.293419    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:34.293419    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:34.293419    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:34.293419    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:34.293419    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:34.294444    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:34 GMT
	I0219 04:12:34.294444    8336 round_trippers.go:580]     Audit-Id: 59f7ad21-7688-46d2-8d8d-167486861bb0
	I0219 04:12:34.294500    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:34.294569    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:34.795438    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:34.795621    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:34.795621    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:34.795621    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:34.798954    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:34.799507    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:34.799507    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:34 GMT
	I0219 04:12:34.799507    8336 round_trippers.go:580]     Audit-Id: 3d62e77e-236a-40bd-831b-063eada28320
	I0219 04:12:34.799507    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:34.799507    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:34.799604    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:34.799604    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:34.799709    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:35.286732    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:35.286732    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:35.286732    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:35.286847    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:35.291595    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:35.291595    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:35.291794    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:35.291794    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:35.291794    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:35.291794    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:35 GMT
	I0219 04:12:35.291794    8336 round_trippers.go:580]     Audit-Id: 77288ca2-1616-47e3-b641-1e7da327fe3b
	I0219 04:12:35.291794    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:35.292153    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:35.292801    8336 node_ready.go:58] node "multinode-657900" has status "Ready":"False"
	I0219 04:12:35.785413    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:35.785413    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:35.785413    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:35.785413    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:35.788980    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:35.788980    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:35.789710    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:35.789710    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:35 GMT
	I0219 04:12:35.789710    8336 round_trippers.go:580]     Audit-Id: 92088a40-2729-4306-baa5-95dcd741878f
	I0219 04:12:35.789710    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:35.789710    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:35.789779    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:35.789779    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:36.287428    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:36.287428    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:36.287428    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:36.287529    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:36.291870    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:36.292383    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:36.292383    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:36 GMT
	I0219 04:12:36.292383    8336 round_trippers.go:580]     Audit-Id: 7d95f049-bd53-471c-b105-12989ff22cff
	I0219 04:12:36.292383    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:36.292383    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:36.292383    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:36.292383    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:36.292749    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:36.791744    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:36.791744    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:36.791744    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:36.791997    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:36.795256    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:36.796021    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:36.796021    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:36.796021    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:36.796021    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:36.796021    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:36.796021    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:36 GMT
	I0219 04:12:36.796021    8336 round_trippers.go:580]     Audit-Id: abb18c04-4893-4ca2-be2d-3d7c77baa7ab
	I0219 04:12:36.796381    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:37.293133    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:37.293133    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:37.293195    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:37.293195    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:37.297136    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:37.297136    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:37.297136    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:37.297136    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:37.297136    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:37 GMT
	I0219 04:12:37.297136    8336 round_trippers.go:580]     Audit-Id: 989eed17-a2e5-4903-9dbe-792470dcd98b
	I0219 04:12:37.297136    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:37.297136    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:37.297136    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:37.297904    8336 node_ready.go:58] node "multinode-657900" has status "Ready":"False"
	I0219 04:12:37.794097    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:37.794097    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:37.794097    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:37.794097    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:37.803510    8336 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0219 04:12:37.803510    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:37.803510    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:37.803510    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:37.803510    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:37.803510    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:37.803510    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:37 GMT
	I0219 04:12:37.803510    8336 round_trippers.go:580]     Audit-Id: 4af34bfa-c26a-43e6-987a-60b44cbf3b8e
	I0219 04:12:37.803510    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:38.284851    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:38.284851    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:38.284923    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:38.284923    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:38.289866    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:38.289866    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:38.289866    8336 round_trippers.go:580]     Audit-Id: ba9b1432-11eb-4f78-adb9-b18d0aa0e923
	I0219 04:12:38.289866    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:38.289866    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:38.289969    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:38.289969    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:38.289969    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:38 GMT
	I0219 04:12:38.290173    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:38.785771    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:38.785771    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:38.785771    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:38.785771    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:38.788455    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:12:38.788455    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:38.788455    8336 round_trippers.go:580]     Audit-Id: 08fcb1ca-49a8-4bf1-9a5e-376b57334b4b
	I0219 04:12:38.788455    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:38.789348    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:38.789348    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:38.789348    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:38.789348    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:38 GMT
	I0219 04:12:38.789423    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1128","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5521 chars]
	I0219 04:12:39.294914    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:39.294914    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:39.294987    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:39.294987    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:39.299298    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:39.299437    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:39.299437    8336 round_trippers.go:580]     Audit-Id: 80dd840a-ae84-4df8-b2d8-a2201d03d9b8
	I0219 04:12:39.299437    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:39.299437    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:39.299437    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:39.299556    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:39.299556    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:39 GMT
	I0219 04:12:39.299794    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:39.300299    8336 node_ready.go:49] node "multinode-657900" has status "Ready":"True"
	I0219 04:12:39.300299    8336 node_ready.go:38] duration metric: took 6.0266351s waiting for node "multinode-657900" to be "Ready" ...
	I0219 04:12:39.300383    8336 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:12:39.300502    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods
	I0219 04:12:39.300502    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:39.300568    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:39.300568    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:39.305982    8336 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0219 04:12:39.305982    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:39.305982    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:39.305982    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:39 GMT
	I0219 04:12:39.305982    8336 round_trippers.go:580]     Audit-Id: 5295d40b-87ca-4215-905f-de7916b279b4
	I0219 04:12:39.305982    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:39.305982    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:39.306993    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:39.321130    8336 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1232"},"items":[{"metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1148","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82530 chars]
	I0219 04:12:39.325715    8336 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:39.325819    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:12:39.325819    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:39.325819    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:39.325819    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:39.337462    8336 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0219 04:12:39.337462    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:39.337462    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:39.337462    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:39.337462    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:39 GMT
	I0219 04:12:39.337462    8336 round_trippers.go:580]     Audit-Id: b1e2edf4-ea9c-4b0d-8b75-88775462bea0
	I0219 04:12:39.337904    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:39.337904    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:39.338910    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1148","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0219 04:12:39.339776    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:39.339776    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:39.339776    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:39.339776    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:39.343410    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:39.343410    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:39.343410    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:39.343410    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:39.343410    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:39.343410    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:39.343410    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:39 GMT
	I0219 04:12:39.343410    8336 round_trippers.go:580]     Audit-Id: beb5d2a6-d49d-4c11-9645-e0242f36689f
	I0219 04:12:39.343410    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:39.856171    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:12:39.856232    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:39.856232    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:39.856232    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:39.859613    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:39.859613    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:39.859913    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:39.859913    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:39.859913    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:39 GMT
	I0219 04:12:39.859913    8336 round_trippers.go:580]     Audit-Id: f74bda02-abbc-413f-9e00-db3efc91920d
	I0219 04:12:39.859996    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:39.859996    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:39.860370    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1148","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0219 04:12:39.861176    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:39.861234    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:39.861234    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:39.861234    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:39.863658    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:12:39.864654    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:39.864654    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:39.864654    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:39.864654    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:39 GMT
	I0219 04:12:39.864654    8336 round_trippers.go:580]     Audit-Id: 123fadd7-856c-457c-b39c-d11fa9641c27
	I0219 04:12:39.864725    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:39.864725    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:39.865092    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:40.355897    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:12:40.355897    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:40.355897    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:40.355897    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:40.363768    8336 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:12:40.364159    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:40.364214    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:40.364214    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:40 GMT
	I0219 04:12:40.364214    8336 round_trippers.go:580]     Audit-Id: 6ae9b2a0-557f-4185-a0dc-e21385b18a87
	I0219 04:12:40.364214    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:40.364214    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:40.364265    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:40.364757    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1148","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0219 04:12:40.366063    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:40.366063    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:40.366137    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:40.366137    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:40.369335    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:40.369335    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:40.369335    8336 round_trippers.go:580]     Audit-Id: 67a95d95-7072-4a73-be23-553e97f67ebd
	I0219 04:12:40.369335    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:40.369335    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:40.369335    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:40.369335    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:40.369335    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:40 GMT
	I0219 04:12:40.370209    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:40.847453    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:12:40.847453    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:40.847453    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:40.847453    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:40.851587    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:40.851587    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:40.851587    8336 round_trippers.go:580]     Audit-Id: c517073a-cdb5-4cba-b40d-74654a311600
	I0219 04:12:40.851587    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:40.851587    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:40.851587    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:40.851587    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:40.851587    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:40 GMT
	I0219 04:12:40.853252    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1148","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0219 04:12:40.854464    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:40.854514    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:40.854555    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:40.854555    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:40.856930    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:12:40.856930    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:40.856930    8336 round_trippers.go:580]     Audit-Id: 7793afdb-1b43-469c-8787-c0f430ee05ba
	I0219 04:12:40.856930    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:40.856930    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:40.856930    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:40.856930    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:40.856930    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:40 GMT
	I0219 04:12:40.858051    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:41.356239    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:12:41.356239    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:41.356321    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:41.356321    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:41.359277    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:12:41.359277    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:41.359277    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:41.359277    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:41 GMT
	I0219 04:12:41.359277    8336 round_trippers.go:580]     Audit-Id: 8011f5d5-e3ba-4e0b-a380-76845da43b70
	I0219 04:12:41.359277    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:41.359277    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:41.359277    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:41.360377    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1148","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0219 04:12:41.361239    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:41.361352    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:41.361352    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:41.361352    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:41.364530    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:41.364530    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:41.364530    8336 round_trippers.go:580]     Audit-Id: 58d2330b-beb9-4d4d-8964-988fb1edc79c
	I0219 04:12:41.364530    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:41.364530    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:41.364530    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:41.365190    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:41.365190    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:41 GMT
	I0219 04:12:41.365381    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:41.365847    8336 pod_ready.go:102] pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace has status "Ready":"False"
	I0219 04:12:41.855597    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:12:41.855676    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:41.855676    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:41.855741    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:41.859369    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:41.859369    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:41.860013    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:41 GMT
	I0219 04:12:41.860013    8336 round_trippers.go:580]     Audit-Id: 16254448-41e7-494f-b071-d4e66c4d85f9
	I0219 04:12:41.860013    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:41.860013    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:41.860013    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:41.860013    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:41.860313    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1148","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0219 04:12:41.860566    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:41.861114    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:41.861114    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:41.861114    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:41.866702    8336 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0219 04:12:41.866702    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:41.866702    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:41.866702    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:41.866702    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:41.866702    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:41 GMT
	I0219 04:12:41.866702    8336 round_trippers.go:580]     Audit-Id: 63385a90-04d0-416a-8b0d-ee8e3f512f08
	I0219 04:12:41.866702    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:41.866702    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:42.359496    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:12:42.359496    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:42.359496    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:42.359496    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:42.364108    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:42.364108    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:42.364108    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:42.364108    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:42.364355    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:42.364355    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:42.364355    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:42 GMT
	I0219 04:12:42.364355    8336 round_trippers.go:580]     Audit-Id: 8eaf909b-4b1e-4796-9632-8204ff5e9d72
	I0219 04:12:42.364700    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1148","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0219 04:12:42.365536    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:42.365536    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:42.365536    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:42.365536    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:42.368814    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:42.368814    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:42.368814    8336 round_trippers.go:580]     Audit-Id: b4a575f5-d362-4b27-8cb3-81ee771d3eef
	I0219 04:12:42.368949    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:42.368949    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:42.368949    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:42.369028    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:42.369028    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:42 GMT
	I0219 04:12:42.369203    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:42.845054    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:12:42.845142    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:42.845142    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:42.845142    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:42.849159    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:42.849867    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:42.849867    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:42.849947    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:42 GMT
	I0219 04:12:42.849947    8336 round_trippers.go:580]     Audit-Id: 398d8e8a-9c8b-4c1f-86fd-4354bc48c575
	I0219 04:12:42.849947    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:42.849947    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:42.849947    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:42.850598    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1148","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0219 04:12:42.850908    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:42.850908    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:42.850908    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:42.850908    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:42.854552    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:42.854652    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:42.854652    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:42.854652    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:42.854652    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:42 GMT
	I0219 04:12:42.854652    8336 round_trippers.go:580]     Audit-Id: 4d52898b-e490-4e82-bb83-a6a93e209f27
	I0219 04:12:42.854652    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:42.854652    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:42.854815    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:43.346014    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:12:43.346014    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:43.346014    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:43.346014    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:43.349661    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:43.350609    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:43.350634    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:43.350634    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:43.350634    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:43.350634    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:43 GMT
	I0219 04:12:43.350634    8336 round_trippers.go:580]     Audit-Id: f5c3112d-bb44-46fb-bf93-ea60e0aa20dd
	I0219 04:12:43.350634    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:43.351173    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1148","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0219 04:12:43.352045    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:43.352136    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:43.352136    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:43.352136    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:43.354438    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:12:43.355464    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:43.355464    8336 round_trippers.go:580]     Audit-Id: fe8b482e-8588-4409-ab06-e9d308dd7f10
	I0219 04:12:43.355509    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:43.355509    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:43.355509    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:43.355509    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:43.355509    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:43 GMT
	I0219 04:12:43.355509    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:43.845162    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:12:43.845221    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:43.845245    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:43.845245    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:43.848679    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:43.848679    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:43.848679    8336 round_trippers.go:580]     Audit-Id: 33c662bb-6a65-4c34-bdfa-02b3f06ec11f
	I0219 04:12:43.848679    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:43.848679    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:43.848679    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:43.848679    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:43.848679    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:43 GMT
	I0219 04:12:43.849919    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1148","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0219 04:12:43.850282    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:43.850282    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:43.850282    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:43.850282    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:43.853102    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:12:43.853102    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:43.853102    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:43.853102    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:43.853102    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:43 GMT
	I0219 04:12:43.853102    8336 round_trippers.go:580]     Audit-Id: 679d6a45-791d-40d3-8e58-113b6b54544f
	I0219 04:12:43.853102    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:43.853102    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:43.853102    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:43.854103    8336 pod_ready.go:102] pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace has status "Ready":"False"
	I0219 04:12:44.350281    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:12:44.350330    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:44.350382    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:44.350431    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:44.354815    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:44.354815    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:44.355185    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:44.355185    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:44.355230    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:44.355230    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:44 GMT
	I0219 04:12:44.355230    8336 round_trippers.go:580]     Audit-Id: 46651be5-7f15-4e35-914c-c800e4b7e249
	I0219 04:12:44.355292    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:44.355676    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1148","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0219 04:12:44.356666    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:44.356764    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:44.356764    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:44.356764    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:44.363330    8336 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0219 04:12:44.363330    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:44.363330    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:44 GMT
	I0219 04:12:44.363330    8336 round_trippers.go:580]     Audit-Id: f7f9f045-d705-474c-9bf6-9d719b576819
	I0219 04:12:44.363330    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:44.363330    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:44.363330    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:44.363330    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:44.363330    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:44.857988    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:12:44.857988    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:44.857988    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:44.857988    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:44.863118    8336 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0219 04:12:44.863118    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:44.863118    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:44.863118    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:44 GMT
	I0219 04:12:44.863118    8336 round_trippers.go:580]     Audit-Id: b7c43600-c2ee-4c80-a460-bc31c1654051
	I0219 04:12:44.863199    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:44.863199    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:44.863235    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:44.863347    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1148","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0219 04:12:44.864182    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:44.864182    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:44.864246    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:44.864276    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:44.867266    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:12:44.867492    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:44.867492    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:44.867629    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:44 GMT
	I0219 04:12:44.867667    8336 round_trippers.go:580]     Audit-Id: 5329aa7e-0f25-4a16-9b66-9b694d581d94
	I0219 04:12:44.867667    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:44.867667    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:44.867667    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:44.867667    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:45.350950    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:12:45.350950    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:45.350950    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:45.350950    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:45.355878    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:45.355878    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:45.355878    8336 round_trippers.go:580]     Audit-Id: 2dff8399-839a-4102-8497-a3a1305387e3
	I0219 04:12:45.355941    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:45.355941    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:45.355941    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:45.355941    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:45.355941    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:45 GMT
	I0219 04:12:45.356008    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1148","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0219 04:12:45.356708    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:45.356708    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:45.356708    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:45.356708    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:45.359270    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:12:45.360152    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:45.360201    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:45.360201    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:45.360201    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:45 GMT
	I0219 04:12:45.360201    8336 round_trippers.go:580]     Audit-Id: 62f9c0e7-b321-4712-a348-03bee1b8d360
	I0219 04:12:45.360201    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:45.360201    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:45.360201    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:45.857992    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:12:45.858068    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:45.858068    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:45.858068    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:45.901293    8336 round_trippers.go:574] Response Status: 200 OK in 43 milliseconds
	I0219 04:12:45.901792    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:45.901792    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:45.901792    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:45.901792    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:45.901859    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:45.901859    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:45 GMT
	I0219 04:12:45.901905    8336 round_trippers.go:580]     Audit-Id: 9949b386-57af-4d07-9293-b5c8b380c96f
	I0219 04:12:45.901905    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1257","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6492 chars]
	I0219 04:12:45.902854    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:45.902917    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:45.902917    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:45.902917    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:45.921772    8336 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0219 04:12:45.921772    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:45.921772    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:45 GMT
	I0219 04:12:45.921772    8336 round_trippers.go:580]     Audit-Id: fc70ad10-49bf-4934-92bc-b2d08f6a67ad
	I0219 04:12:45.921772    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:45.921772    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:45.921772    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:45.922098    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:45.922755    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:45.923406    8336 pod_ready.go:92] pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace has status "Ready":"True"
	I0219 04:12:45.923406    8336 pod_ready.go:81] duration metric: took 6.5977123s waiting for pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:45.923499    8336 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:45.923653    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-657900
	I0219 04:12:45.923653    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:45.923653    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:45.923653    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:45.939759    8336 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0219 04:12:45.939759    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:45.939759    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:45.939759    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:45.940209    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:45.940209    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:45 GMT
	I0219 04:12:45.940209    8336 round_trippers.go:580]     Audit-Id: c27ae01a-d0cd-4c65-96b3-435d69bfaccb
	I0219 04:12:45.940209    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:45.940504    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-657900","namespace":"kube-system","uid":"e77b4ae1-9bb6-48e7-a39d-b91eaa2fbe32","resourceVersion":"1229","creationTimestamp":"2023-02-19T04:12:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.244.121:2379","kubernetes.io/config.hash":"cf2e032f8176f837f5bcf073190e4313","kubernetes.io/config.mirror":"cf2e032f8176f837f5bcf073190e4313","kubernetes.io/config.seen":"2023-02-19T04:12:18.622144946Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:12:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5857 chars]
	I0219 04:12:45.941082    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:45.941082    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:45.941082    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:45.941082    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:45.949724    8336 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0219 04:12:45.950374    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:45.950374    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:45.950374    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:45.950448    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:45.950448    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:45 GMT
	I0219 04:12:45.950448    8336 round_trippers.go:580]     Audit-Id: 99378cb5-3af2-4f0e-a25f-04e3273fc396
	I0219 04:12:45.950448    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:45.950549    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:45.951085    8336 pod_ready.go:92] pod "etcd-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:12:45.951085    8336 pod_ready.go:81] duration metric: took 27.5864ms waiting for pod "etcd-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:45.951228    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:45.951311    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-657900
	I0219 04:12:45.951311    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:45.951311    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:45.951311    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:45.967148    8336 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0219 04:12:45.967148    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:45.967148    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:45.967148    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:45 GMT
	I0219 04:12:45.967148    8336 round_trippers.go:580]     Audit-Id: 5ff93adf-410e-4484-a422-22ede50c66aa
	I0219 04:12:45.968112    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:45.968112    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:45.968143    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:45.968272    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-657900","namespace":"kube-system","uid":"e47db067-f2ff-412b-954f-0b6b6cf42f8b","resourceVersion":"1186","creationTimestamp":"2023-02-19T04:12:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.244.121:8443","kubernetes.io/config.hash":"64d9d1395b6e25aebebbf4adfc03e069","kubernetes.io/config.mirror":"64d9d1395b6e25aebebbf4adfc03e069","kubernetes.io/config.seen":"2023-02-19T04:12:18.621131732Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:12:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7393 chars]
	I0219 04:12:45.969239    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:45.969239    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:45.969239    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:45.969321    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:45.972892    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:45.972892    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:45.972892    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:45.972892    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:45.972892    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:45.972892    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:45.972892    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:45 GMT
	I0219 04:12:45.972892    8336 round_trippers.go:580]     Audit-Id: 97dc8da8-46ed-4bef-bf62-5e06818d9c42
	I0219 04:12:45.972892    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:45.973889    8336 pod_ready.go:92] pod "kube-apiserver-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:12:45.973889    8336 pod_ready.go:81] duration metric: took 22.661ms waiting for pod "kube-apiserver-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:45.973889    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:45.973889    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-657900
	I0219 04:12:45.973889    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:45.973889    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:45.973889    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:45.976900    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:45.977252    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:45.977252    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:45.977252    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:45.977252    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:45 GMT
	I0219 04:12:45.977252    8336 round_trippers.go:580]     Audit-Id: 520a1d02-9573-4607-a41b-9929a34daae2
	I0219 04:12:45.977323    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:45.977323    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:45.977777    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-657900","namespace":"kube-system","uid":"463b901e-dd04-46fc-91a3-9917b12590ff","resourceVersion":"1192","creationTimestamp":"2023-02-19T04:00:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cd5ea91854c20d0b081e1be96fa370f","kubernetes.io/config.mirror":"7cd5ea91854c20d0b081e1be96fa370f","kubernetes.io/config.seen":"2023-02-19T04:00:19.445306645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7163 chars]
	I0219 04:12:45.977777    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:45.977777    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:45.977777    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:45.977777    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:45.983884    8336 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0219 04:12:45.983884    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:45.983884    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:45.983884    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:45.983884    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:45 GMT
	I0219 04:12:45.983884    8336 round_trippers.go:580]     Audit-Id: e1a71069-bc49-4e3a-8bef-73d32bffacae
	I0219 04:12:45.983884    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:45.983884    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:45.984887    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:45.984887    8336 pod_ready.go:92] pod "kube-controller-manager-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:12:45.984887    8336 pod_ready.go:81] duration metric: took 10.9974ms waiting for pod "kube-controller-manager-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:45.984887    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8h9z4" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:45.985896    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h9z4
	I0219 04:12:45.985896    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:45.985896    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:45.985896    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:45.987896    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:12:45.987896    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:45.987896    8336 round_trippers.go:580]     Audit-Id: 31d45772-f6bd-41ee-873b-d75fd2dc637d
	I0219 04:12:45.987896    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:45.987896    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:45.987896    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:45.987896    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:45.987896    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:45 GMT
	I0219 04:12:45.987896    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h9z4","generateName":"kube-proxy-","namespace":"kube-system","uid":"5ff10d29-0b2a-4046-a946-90b1a4d8bcb7","resourceVersion":"541","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"86ae75b5-707b-4d98-a30e-e970d37cba85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86ae75b5-707b-4d98-a30e-e970d37cba85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I0219 04:12:45.988913    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:12:45.988913    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:45.988913    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:45.988913    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:45.990892    8336 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0219 04:12:45.991934    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:45.991934    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:45.991934    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:45.991934    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:45.991934    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:45 GMT
	I0219 04:12:45.991934    8336 round_trippers.go:580]     Audit-Id: 0a3a4a21-08ab-4efb-b1bb-dd13cb415315
	I0219 04:12:45.991934    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:45.991934    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9","resourceVersion":"946","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4513 chars]
	I0219 04:12:45.991934    8336 pod_ready.go:92] pod "kube-proxy-8h9z4" in "kube-system" namespace has status "Ready":"True"
	I0219 04:12:45.991934    8336 pod_ready.go:81] duration metric: took 7.0477ms waiting for pod "kube-proxy-8h9z4" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:45.991934    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kcm8m" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:46.062197    8336 request.go:622] Waited for 69.9396ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kcm8m
	I0219 04:12:46.062197    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kcm8m
	I0219 04:12:46.062197    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:46.062197    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:46.062197    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:46.069433    8336 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:12:46.070207    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:46.070207    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:46 GMT
	I0219 04:12:46.070437    8336 round_trippers.go:580]     Audit-Id: c5737009-62fb-4882-8bb8-2436263d8b27
	I0219 04:12:46.070437    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:46.070437    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:46.070437    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:46.070437    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:46.070620    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kcm8m","generateName":"kube-proxy-","namespace":"kube-system","uid":"8ce14b4f-6df3-4822-ac2b-06f3417e8eaa","resourceVersion":"1198","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"86ae75b5-707b-4d98-a30e-e970d37cba85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86ae75b5-707b-4d98-a30e-e970d37cba85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0219 04:12:46.266570    8336 request.go:622] Waited for 195.2309ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:46.267048    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:46.267048    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:46.267048    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:46.267182    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:46.270529    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:46.270529    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:46.270529    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:46 GMT
	I0219 04:12:46.270529    8336 round_trippers.go:580]     Audit-Id: 3f28a3e5-b47d-47ad-a7fd-9c09a1ed145f
	I0219 04:12:46.271242    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:46.271242    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:46.271242    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:46.271242    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:46.271443    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:46.272039    8336 pod_ready.go:92] pod "kube-proxy-kcm8m" in "kube-system" namespace has status "Ready":"True"
	I0219 04:12:46.272039    8336 pod_ready.go:81] duration metric: took 280.106ms waiting for pod "kube-proxy-kcm8m" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:46.272039    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n5vsl" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:46.469717    8336 request.go:622] Waited for 197.1521ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n5vsl
	I0219 04:12:46.469838    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n5vsl
	I0219 04:12:46.469838    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:46.469838    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:46.469913    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:46.473770    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:46.473770    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:46.473770    8336 round_trippers.go:580]     Audit-Id: ea1e2987-1168-402c-ae3d-f62890ee59a3
	I0219 04:12:46.473770    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:46.473770    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:46.473770    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:46.474643    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:46.474643    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:46 GMT
	I0219 04:12:46.474984    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-n5vsl","generateName":"kube-proxy-","namespace":"kube-system","uid":"8757301c-e7d4-4784-8e1b-8e1f24aeabcd","resourceVersion":"1090","creationTimestamp":"2023-02-19T04:05:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"86ae75b5-707b-4d98-a30e-e970d37cba85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:05:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86ae75b5-707b-4d98-a30e-e970d37cba85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5751 chars]
	I0219 04:12:46.672292    8336 request.go:622] Waited for 196.5097ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:12:46.672374    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:12:46.672374    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:46.672374    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:46.672374    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:46.677244    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:46.677379    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:46.677379    8336 round_trippers.go:580]     Audit-Id: 4f8b04ff-c0e6-40a6-b589-f002ac9b62c5
	I0219 04:12:46.677379    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:46.677379    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:46.677451    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:46.677451    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:46.677451    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:46 GMT
	I0219 04:12:46.678000    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"5442a324-219c-450a-bc84-42446fe87d39","resourceVersion":"1103","creationTimestamp":"2023-02-19T04:09:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:09:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4331 chars]
	I0219 04:12:46.678755    8336 pod_ready.go:92] pod "kube-proxy-n5vsl" in "kube-system" namespace has status "Ready":"True"
	I0219 04:12:46.678755    8336 pod_ready.go:81] duration metric: took 406.717ms waiting for pod "kube-proxy-n5vsl" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:46.678755    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:46.873334    8336 request.go:622] Waited for 194.5798ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-657900
	I0219 04:12:46.873541    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-657900
	I0219 04:12:46.873646    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:46.873646    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:46.873646    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:46.878188    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:46.878188    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:46.878188    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:46.878188    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:46.878188    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:46.878188    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:46.878188    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:46 GMT
	I0219 04:12:46.878507    8336 round_trippers.go:580]     Audit-Id: 22db6410-cf56-433a-8b46-a9791b00fa07
	I0219 04:12:46.878777    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-657900","namespace":"kube-system","uid":"ba38eff9-ab82-463a-bb6f-8af5e4599f15","resourceVersion":"1223","creationTimestamp":"2023-02-19T04:00:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d67ab919dfafdb0eecec781e708349ff","kubernetes.io/config.mirror":"d67ab919dfafdb0eecec781e708349ff","kubernetes.io/config.seen":"2023-02-19T04:00:19.445308045Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4893 chars]
	I0219 04:12:47.066851    8336 request.go:622] Waited for 187.0646ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:47.067133    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:12:47.067133    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:47.067133    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:47.067133    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:47.070561    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:12:47.070561    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:47.070561    8336 round_trippers.go:580]     Audit-Id: 802a38c4-4e77-4cbb-9467-0cc16ee62381
	I0219 04:12:47.070561    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:47.070561    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:47.070561    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:47.070561    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:47.070561    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:47 GMT
	I0219 04:12:47.071985    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:12:47.072668    8336 pod_ready.go:92] pod "kube-scheduler-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:12:47.072668    8336 pod_ready.go:81] duration metric: took 393.9148ms waiting for pod "kube-scheduler-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:12:47.072668    8336 pod_ready.go:38] duration metric: took 7.7723115s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:12:47.072762    8336 api_server.go:51] waiting for apiserver process to appear ...
	I0219 04:12:47.081809    8336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:12:47.099440    8336 command_runner.go:130] > 1827
	I0219 04:12:47.099440    8336 api_server.go:71] duration metric: took 13.967601s to wait for apiserver process to appear ...
	I0219 04:12:47.099440    8336 api_server.go:87] waiting for apiserver healthz status ...
	I0219 04:12:47.099440    8336 api_server.go:252] Checking apiserver healthz at https://172.28.244.121:8443/healthz ...
	I0219 04:12:47.107080    8336 api_server.go:278] https://172.28.244.121:8443/healthz returned 200:
	ok
	I0219 04:12:47.107449    8336 round_trippers.go:463] GET https://172.28.244.121:8443/version
	I0219 04:12:47.107449    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:47.107449    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:47.107449    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:47.108769    8336 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0219 04:12:47.109160    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:47.109160    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:47.109160    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:47.109160    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:47.109160    8336 round_trippers.go:580]     Content-Length: 263
	I0219 04:12:47.109160    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:47 GMT
	I0219 04:12:47.109160    8336 round_trippers.go:580]     Audit-Id: c8687ceb-b8cd-435a-9706-a0e28a63755f
	I0219 04:12:47.109244    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:47.109244    8336 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0219 04:12:47.109244    8336 api_server.go:140] control plane version: v1.26.1
	I0219 04:12:47.109244    8336 api_server.go:130] duration metric: took 9.8037ms to wait for apiserver health ...
	I0219 04:12:47.109244    8336 system_pods.go:43] waiting for kube-system pods to appear ...
	I0219 04:12:47.271619    8336 request.go:622] Waited for 162.2677ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods
	I0219 04:12:47.274933    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods
	I0219 04:12:47.274933    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:47.274933    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:47.274933    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:47.285989    8336 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0219 04:12:47.285989    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:47.285989    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:47 GMT
	I0219 04:12:47.286772    8336 round_trippers.go:580]     Audit-Id: 7dca91b2-0877-4170-bf4b-d50d0d1c6323
	I0219 04:12:47.286772    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:47.286772    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:47.286772    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:47.286772    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:47.288093    8336 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1265"},"items":[{"metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1257","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82478 chars]
	I0219 04:12:47.291426    8336 system_pods.go:59] 12 kube-system pods found
	I0219 04:12:47.292333    8336 system_pods.go:61] "coredns-787d4945fb-9mvfg" [38bce706-085e-44e0-bf5e-97cbdebb682e] Running
	I0219 04:12:47.292333    8336 system_pods.go:61] "etcd-multinode-657900" [e77b4ae1-9bb6-48e7-a39d-b91eaa2fbe32] Running
	I0219 04:12:47.292333    8336 system_pods.go:61] "kindnet-fp2c9" [fabe9c73-4899-458b-b4ed-16d65d69e5d9] Running
	I0219 04:12:47.292333    8336 system_pods.go:61] "kindnet-lvjng" [df7a9269-516f-4b66-af0f-429b21ee31cc] Running
	I0219 04:12:47.292333    8336 system_pods.go:61] "kindnet-zvk4x" [de4adab4-766a-4c34-b827-9bedc6468779] Running
	I0219 04:12:47.292333    8336 system_pods.go:61] "kube-apiserver-multinode-657900" [e47db067-f2ff-412b-954f-0b6b6cf42f8b] Running
	I0219 04:12:47.292333    8336 system_pods.go:61] "kube-controller-manager-multinode-657900" [463b901e-dd04-46fc-91a3-9917b12590ff] Running
	I0219 04:12:47.292333    8336 system_pods.go:61] "kube-proxy-8h9z4" [5ff10d29-0b2a-4046-a946-90b1a4d8bcb7] Running
	I0219 04:12:47.292333    8336 system_pods.go:61] "kube-proxy-kcm8m" [8ce14b4f-6df3-4822-ac2b-06f3417e8eaa] Running
	I0219 04:12:47.292333    8336 system_pods.go:61] "kube-proxy-n5vsl" [8757301c-e7d4-4784-8e1b-8e1f24aeabcd] Running
	I0219 04:12:47.292333    8336 system_pods.go:61] "kube-scheduler-multinode-657900" [ba38eff9-ab82-463a-bb6f-8af5e4599f15] Running
	I0219 04:12:47.292333    8336 system_pods.go:61] "storage-provisioner" [4fcb063a-be6a-41e8-9379-c8f7cf16a165] Running
	I0219 04:12:47.292333    8336 system_pods.go:74] duration metric: took 183.0892ms to wait for pod list to return data ...
	I0219 04:12:47.292478    8336 default_sa.go:34] waiting for default service account to be created ...
	I0219 04:12:47.460272    8336 request.go:622] Waited for 167.4143ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/default/serviceaccounts
	I0219 04:12:47.460487    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/default/serviceaccounts
	I0219 04:12:47.460487    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:47.460487    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:47.460487    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:47.465386    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:12:47.465793    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:47.465793    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:47 GMT
	I0219 04:12:47.465793    8336 round_trippers.go:580]     Audit-Id: cdd3844a-cb81-4c70-9fe4-63f450522ffe
	I0219 04:12:47.465793    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:47.465793    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:47.465793    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:47.465793    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:47.465793    8336 round_trippers.go:580]     Content-Length: 262
	I0219 04:12:47.465793    8336 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1265"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"ddbec5b6-816c-4d34-aa55-cd3b12c88d54","resourceVersion":"320","creationTimestamp":"2023-02-19T04:00:32Z"}}]}
	I0219 04:12:47.465793    8336 default_sa.go:45] found service account: "default"
	I0219 04:12:47.465793    8336 default_sa.go:55] duration metric: took 173.3155ms for default service account to be created ...
	I0219 04:12:47.465793    8336 system_pods.go:116] waiting for k8s-apps to be running ...
	I0219 04:12:47.663691    8336 request.go:622] Waited for 196.9266ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods
	I0219 04:12:47.663761    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods
	I0219 04:12:47.663761    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:47.663858    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:47.663858    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:47.670206    8336 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0219 04:12:47.670206    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:47.670206    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:47 GMT
	I0219 04:12:47.670206    8336 round_trippers.go:580]     Audit-Id: fdd2d92d-8e8f-4bb4-8fde-fbe6552136c7
	I0219 04:12:47.670635    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:47.670635    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:47.670739    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:47.670739    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:47.672340    8336 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1265"},"items":[{"metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1257","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82478 chars]
	I0219 04:12:47.676300    8336 system_pods.go:86] 12 kube-system pods found
	I0219 04:12:47.676300    8336 system_pods.go:89] "coredns-787d4945fb-9mvfg" [38bce706-085e-44e0-bf5e-97cbdebb682e] Running
	I0219 04:12:47.676300    8336 system_pods.go:89] "etcd-multinode-657900" [e77b4ae1-9bb6-48e7-a39d-b91eaa2fbe32] Running
	I0219 04:12:47.676300    8336 system_pods.go:89] "kindnet-fp2c9" [fabe9c73-4899-458b-b4ed-16d65d69e5d9] Running
	I0219 04:12:47.676300    8336 system_pods.go:89] "kindnet-lvjng" [df7a9269-516f-4b66-af0f-429b21ee31cc] Running
	I0219 04:12:47.676300    8336 system_pods.go:89] "kindnet-zvk4x" [de4adab4-766a-4c34-b827-9bedc6468779] Running
	I0219 04:12:47.676300    8336 system_pods.go:89] "kube-apiserver-multinode-657900" [e47db067-f2ff-412b-954f-0b6b6cf42f8b] Running
	I0219 04:12:47.676300    8336 system_pods.go:89] "kube-controller-manager-multinode-657900" [463b901e-dd04-46fc-91a3-9917b12590ff] Running
	I0219 04:12:47.676300    8336 system_pods.go:89] "kube-proxy-8h9z4" [5ff10d29-0b2a-4046-a946-90b1a4d8bcb7] Running
	I0219 04:12:47.676300    8336 system_pods.go:89] "kube-proxy-kcm8m" [8ce14b4f-6df3-4822-ac2b-06f3417e8eaa] Running
	I0219 04:12:47.676300    8336 system_pods.go:89] "kube-proxy-n5vsl" [8757301c-e7d4-4784-8e1b-8e1f24aeabcd] Running
	I0219 04:12:47.676300    8336 system_pods.go:89] "kube-scheduler-multinode-657900" [ba38eff9-ab82-463a-bb6f-8af5e4599f15] Running
	I0219 04:12:47.676300    8336 system_pods.go:89] "storage-provisioner" [4fcb063a-be6a-41e8-9379-c8f7cf16a165] Running
	I0219 04:12:47.676300    8336 system_pods.go:126] duration metric: took 210.508ms to wait for k8s-apps to be running ...
	I0219 04:12:47.676300    8336 system_svc.go:44] waiting for kubelet service to be running ....
	I0219 04:12:47.684866    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:12:47.711864    8336 system_svc.go:56] duration metric: took 35.5633ms WaitForService to wait for kubelet.
	I0219 04:12:47.711864    8336 kubeadm.go:578] duration metric: took 14.5800263s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0219 04:12:47.711864    8336 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:12:47.866637    8336 request.go:622] Waited for 153.5648ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes
	I0219 04:12:47.866780    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes
	I0219 04:12:47.866780    8336 round_trippers.go:469] Request Headers:
	I0219 04:12:47.866780    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:12:47.866871    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:12:47.874113    8336 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:12:47.874113    8336 round_trippers.go:577] Response Headers:
	I0219 04:12:47.874113    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:12:47.874113    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:12:47 GMT
	I0219 04:12:47.874113    8336 round_trippers.go:580]     Audit-Id: d09fe350-f3d5-4def-b4a8-2df22c237fa6
	I0219 04:12:47.874113    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:12:47.874113    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:12:47.874113    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:12:47.874786    8336 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1267"},"items":[{"metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16276 chars]
	I0219 04:12:47.875850    8336 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:12:47.875850    8336 node_conditions.go:123] node cpu capacity is 2
	I0219 04:12:47.875850    8336 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:12:47.875850    8336 node_conditions.go:123] node cpu capacity is 2
	I0219 04:12:47.875850    8336 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:12:47.875850    8336 node_conditions.go:123] node cpu capacity is 2
	I0219 04:12:47.875850    8336 node_conditions.go:105] duration metric: took 163.9865ms to run NodePressure ...
	I0219 04:12:47.875850    8336 start.go:228] waiting for startup goroutines ...
	I0219 04:12:47.875850    8336 start.go:233] waiting for cluster config update ...
	I0219 04:12:47.875850    8336 start.go:242] writing updated cluster config ...
	I0219 04:12:47.888182    8336 config.go:182] Loaded profile config "multinode-657900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:12:47.888182    8336 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\config.json ...
	I0219 04:12:47.894149    8336 out.go:177] * Starting worker node multinode-657900-m02 in cluster multinode-657900
	I0219 04:12:47.897622    8336 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:12:47.897812    8336 cache.go:57] Caching tarball of preloaded images
	I0219 04:12:47.897812    8336 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0219 04:12:47.897812    8336 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0219 04:12:47.898422    8336 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\config.json ...
	I0219 04:12:47.900316    8336 cache.go:193] Successfully downloaded all kic artifacts
	I0219 04:12:47.900316    8336 start.go:364] acquiring machines lock for multinode-657900-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0219 04:12:47.900316    8336 start.go:368] acquired machines lock for "multinode-657900-m02" in 0s
	I0219 04:12:47.900316    8336 start.go:96] Skipping create...Using existing machine configuration
	I0219 04:12:47.900316    8336 fix.go:55] fixHost starting: m02
	I0219 04:12:47.901347    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:12:48.576110    8336 main.go:141] libmachine: [stdout =====>] : Off
	
	I0219 04:12:48.576110    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:48.576110    8336 fix.go:103] recreateIfNeeded on multinode-657900-m02: state=Stopped err=<nil>
	W0219 04:12:48.576110    8336 fix.go:129] unexpected machine state, will restart: <nil>
	I0219 04:12:48.580456    8336 out.go:177] * Restarting existing hyperv VM for "multinode-657900-m02" ...
	I0219 04:12:48.582050    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-657900-m02
	I0219 04:12:50.203163    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:12:50.203163    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:50.203163    8336 main.go:141] libmachine: Waiting for host to start...
	I0219 04:12:50.203163    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:12:50.900991    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:12:50.900991    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:50.900991    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:12:51.906896    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:12:51.907178    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:52.921650    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:12:53.649549    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:12:53.649809    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:53.649809    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:12:54.680106    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:12:54.680106    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:55.695333    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:12:56.412281    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:12:56.412281    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:56.412281    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:12:57.394141    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:12:57.394413    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:58.397416    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:12:59.112823    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:12:59.112823    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:12:59.113209    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:00.097039    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:13:00.097186    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:01.099337    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:01.797968    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:01.798134    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:01.798261    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:02.783459    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:13:02.783828    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:03.788955    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:04.489185    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:04.489592    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:04.489645    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:05.473024    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:13:05.473157    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:06.476524    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:07.209971    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:07.210216    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:07.210216    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:08.229404    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:13:08.229404    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:09.230772    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:09.945639    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:09.945639    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:09.945639    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:10.942666    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:13:10.942666    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:11.947124    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:12.674866    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:12.674909    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:12.675257    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:13.689708    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:13:13.689708    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:14.703018    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:15.477365    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:15.477542    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:15.477542    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:16.512037    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.48
	
	I0219 04:13:16.512037    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:16.514429    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:17.225313    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:17.225313    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:17.225394    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:18.302418    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.48
	
	I0219 04:13:18.302418    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:18.302779    8336 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\config.json ...
	I0219 04:13:18.305807    8336 machine.go:88] provisioning docker machine ...
	I0219 04:13:18.305807    8336 buildroot.go:166] provisioning hostname "multinode-657900-m02"
	I0219 04:13:18.305875    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:19.064569    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:19.064899    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:19.064947    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:20.107712    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.48
	
	I0219 04:13:20.108001    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:20.112081    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:13:20.112832    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.48 22 <nil> <nil>}
	I0219 04:13:20.112832    8336 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-657900-m02 && echo "multinode-657900-m02" | sudo tee /etc/hostname
	I0219 04:13:20.282601    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-657900-m02
	
	I0219 04:13:20.282688    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:21.023652    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:21.023652    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:21.023725    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:22.065651    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.48
	
	I0219 04:13:22.065906    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:22.069209    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:13:22.069209    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.48 22 <nil> <nil>}
	I0219 04:13:22.069209    8336 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-657900-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-657900-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-657900-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:13:22.225970    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0219 04:13:22.225970    8336 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:13:22.225970    8336 buildroot.go:174] setting up certificates
	I0219 04:13:22.225970    8336 provision.go:83] configureAuth start
	I0219 04:13:22.225970    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:22.947544    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:22.947544    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:22.947758    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:24.037514    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.48
	
	I0219 04:13:24.037876    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:24.037973    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:24.752725    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:24.752804    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:24.752804    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:25.783352    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.48
	
	I0219 04:13:25.783352    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:25.783429    8336 provision.go:138] copyHostCerts
	I0219 04:13:25.783659    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0219 04:13:25.784014    8336 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:13:25.784132    8336 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:13:25.784591    8336 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:13:25.785980    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0219 04:13:25.786198    8336 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:13:25.786258    8336 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:13:25.786635    8336 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:13:25.787593    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0219 04:13:25.787860    8336 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:13:25.787953    8336 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:13:25.788149    8336 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:13:25.789482    8336 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-657900-m02 san=[172.28.250.48 172.28.250.48 localhost 127.0.0.1 minikube multinode-657900-m02]
	I0219 04:13:26.021189    8336 provision.go:172] copyRemoteCerts
	I0219 04:13:26.030249    8336 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:13:26.030249    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:26.800499    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:26.800499    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:26.800499    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:27.843782    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.48
	
	I0219 04:13:27.843782    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:27.843782    8336 sshutil.go:53] new ssh client: &{IP:172.28.250.48 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m02\id_rsa Username:docker}
	I0219 04:13:27.954971    8336 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.9247291s)
	I0219 04:13:27.954971    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0219 04:13:27.954971    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:13:27.995645    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0219 04:13:27.995645    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0219 04:13:28.038479    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0219 04:13:28.038695    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0219 04:13:28.088601    8336 provision.go:86] duration metric: configureAuth took 5.8626502s
	I0219 04:13:28.088601    8336 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:13:28.089423    8336 config.go:182] Loaded profile config "multinode-657900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:13:28.089817    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:28.796592    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:28.796592    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:28.796592    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:29.886141    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.48
	
	I0219 04:13:29.886407    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:29.891315    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:13:29.892050    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.48 22 <nil> <nil>}
	I0219 04:13:29.892219    8336 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:13:30.049803    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:13:30.049803    8336 buildroot.go:70] root file system type: tmpfs
	I0219 04:13:30.049876    8336 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:13:30.049876    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:30.802934    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:30.802934    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:30.802934    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:31.861948    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.48
	
	I0219 04:13:31.861948    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:31.868721    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:13:31.869377    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.48 22 <nil> <nil>}
	I0219 04:13:31.869377    8336 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.244.121"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 04:13:32.033520    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.244.121
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0219 04:13:32.033675    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:32.763660    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:32.763692    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:32.763772    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:33.856560    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.48
	
	I0219 04:13:33.856560    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:33.861858    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:13:33.862530    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.48 22 <nil> <nil>}
	I0219 04:13:33.862530    8336 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 04:13:35.124234    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0219 04:13:35.124234    8336 machine.go:91] provisioned docker machine in 16.8184817s
	I0219 04:13:35.124234    8336 start.go:300] post-start starting for "multinode-657900-m02" (driver="hyperv")
	I0219 04:13:35.124234    8336 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 04:13:35.135064    8336 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 04:13:35.135089    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:35.851995    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:35.852269    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:35.852269    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:36.866619    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.48
	
	I0219 04:13:36.866922    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:36.867560    8336 sshutil.go:53] new ssh client: &{IP:172.28.250.48 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m02\id_rsa Username:docker}
	I0219 04:13:36.978955    8336 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.8438265s)
	I0219 04:13:36.989433    8336 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 04:13:36.995283    8336 command_runner.go:130] > NAME=Buildroot
	I0219 04:13:36.995283    8336 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
	I0219 04:13:36.995283    8336 command_runner.go:130] > ID=buildroot
	I0219 04:13:36.995283    8336 command_runner.go:130] > VERSION_ID=2021.02.12
	I0219 04:13:36.995283    8336 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0219 04:13:36.995513    8336 info.go:137] Remote host: Buildroot 2021.02.12
	I0219 04:13:36.995556    8336 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 04:13:36.995590    8336 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 04:13:36.996809    8336 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 04:13:36.996809    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> /etc/ssl/certs/101482.pem
	I0219 04:13:37.005898    8336 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 04:13:37.020994    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 04:13:37.060849    8336 start.go:303] post-start completed in 1.93657s
	I0219 04:13:37.060879    8336 fix.go:57] fixHost completed within 49.1607245s
	I0219 04:13:37.060954    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:37.752381    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:37.752569    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:37.752569    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:38.779867    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.48
	
	I0219 04:13:38.780083    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:38.783961    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:13:38.784696    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.48 22 <nil> <nil>}
	I0219 04:13:38.784696    8336 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0219 04:13:38.925074    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676780018.918998963
	
	I0219 04:13:38.925074    8336 fix.go:207] guest clock: 1676780018.918998963
	I0219 04:13:38.925195    8336 fix.go:220] Guest: 2023-02-19 04:13:38.918998963 +0000 GMT Remote: 2023-02-19 04:13:37.0608792 +0000 GMT m=+144.987501401 (delta=1.858119763s)
	I0219 04:13:38.925195    8336 fix.go:191] guest clock delta is within tolerance: 1.858119763s
	I0219 04:13:38.925195    8336 start.go:83] releasing machines lock for "multinode-657900-m02", held for 51.0250466s
	I0219 04:13:38.925379    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:39.631303    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:39.631459    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:39.631459    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:40.717355    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.48
	
	I0219 04:13:40.717423    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:40.720924    8336 out.go:177] * Found network options:
	I0219 04:13:40.723578    8336 out.go:177]   - NO_PROXY=172.28.244.121
	W0219 04:13:40.726047    8336 proxy.go:119] fail to check proxy env: Error ip not in block
	I0219 04:13:40.728846    8336 out.go:177]   - no_proxy=172.28.244.121
	W0219 04:13:40.730829    8336 proxy.go:119] fail to check proxy env: Error ip not in block
	W0219 04:13:40.734567    8336 proxy.go:119] fail to check proxy env: Error ip not in block
	I0219 04:13:40.736581    8336 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0219 04:13:40.736642    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:40.744534    8336 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0219 04:13:40.744534    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:13:41.496572    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:41.496572    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:41.496572    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:41.503650    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:41.503650    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:41.503650    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:42.604891    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.48
	
	I0219 04:13:42.605018    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:42.605018    8336 sshutil.go:53] new ssh client: &{IP:172.28.250.48 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m02\id_rsa Username:docker}
	I0219 04:13:42.624984    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.48
	
	I0219 04:13:42.624984    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:42.624984    8336 sshutil.go:53] new ssh client: &{IP:172.28.250.48 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m02\id_rsa Username:docker}
	I0219 04:13:42.757303    8336 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0219 04:13:42.757392    8336 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.0208176s)
	I0219 04:13:42.757492    8336 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0219 04:13:42.757606    8336 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (2.0130589s)
	W0219 04:13:42.757606    8336 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0219 04:13:42.768315    8336 ssh_runner.go:195] Run: which cri-dockerd
	I0219 04:13:42.774456    8336 command_runner.go:130] > /usr/bin/cri-dockerd
	I0219 04:13:42.785409    8336 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0219 04:13:42.802913    8336 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0219 04:13:42.843252    8336 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0219 04:13:42.873137    8336 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0219 04:13:42.873137    8336 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
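The disable step above renames any bridge/podman CNI configs so the container runtime ignores them. A minimal sketch of the same `find -exec mv` pattern, run against a scratch directory rather than the real `/etc/cni/net.d` (paths are illustrative):

```shell
# Scratch CNI directory with a podman bridge config (illustrative path)
mkdir -p /tmp/cni-demo/net.d
touch /tmp/cni-demo/net.d/87-podman-bridge.conflist

# Rename matching bridge/podman configs, skipping ones already disabled,
# mirroring the find/-exec mv command minikube runs over /etc/cni/net.d
find /tmp/cni-demo/net.d -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls /tmp/cni-demo/net.d
```

Because already-renamed files no longer match the filter, re-running the command is a no-op, which is why minikube can apply it on every start.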
	I0219 04:13:42.873137    8336 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:13:42.881783    8336 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:13:42.921238    8336 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0219 04:13:42.921238    8336 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0219 04:13:42.921238    8336 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0219 04:13:42.921238    8336 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0219 04:13:42.921238    8336 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0219 04:13:42.921238    8336 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0219 04:13:42.921238    8336 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0219 04:13:42.921238    8336 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0219 04:13:42.921238    8336 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0219 04:13:42.921238    8336 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0219 04:13:42.921238    8336 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0219 04:13:42.921841    8336 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0219 04:13:42.921841    8336 docker.go:560] Images already preloaded, skipping extraction
	I0219 04:13:42.921841    8336 start.go:485] detecting cgroup driver to use...
	I0219 04:13:42.921841    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:13:42.954621    8336 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0219 04:13:42.954621    8336 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0219 04:13:42.965096    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0219 04:13:42.995196    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0219 04:13:43.012281    8336 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0219 04:13:43.023746    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0219 04:13:43.052407    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:13:43.080635    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0219 04:13:43.110111    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:13:43.137589    8336 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0219 04:13:43.166219    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
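The sed series above rewrites `/etc/containerd/config.toml` in place; the SystemdCgroup toggle is representative. A sketch of that one edit against a scratch config fragment (file path and contents are illustrative; assumes GNU sed for `-i`):

```shell
# Scratch containerd config fragment with systemd cgroups enabled
CFG=/tmp/containerd-demo.toml
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = true' > "$CFG"

# Same substitution minikube runs: force the cgroupfs driver while
# preserving the line's original indentation via the \1 backreference
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"

grep SystemdCgroup "$CFG"
```

The capture group matters: containerd's TOML is indentation-heavy, and a blunt replacement would move the key out of its table.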
	I0219 04:13:43.193955    8336 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0219 04:13:43.211065    8336 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0219 04:13:43.220617    8336 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0219 04:13:43.258251    8336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:13:43.437643    8336 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0219 04:13:43.466922    8336 start.go:485] detecting cgroup driver to use...
	I0219 04:13:43.477054    8336 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0219 04:13:43.496074    8336 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0219 04:13:43.496074    8336 command_runner.go:130] > [Unit]
	I0219 04:13:43.496074    8336 command_runner.go:130] > Description=Docker Application Container Engine
	I0219 04:13:43.496074    8336 command_runner.go:130] > Documentation=https://docs.docker.com
	I0219 04:13:43.496074    8336 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0219 04:13:43.496074    8336 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0219 04:13:43.496074    8336 command_runner.go:130] > StartLimitBurst=3
	I0219 04:13:43.496074    8336 command_runner.go:130] > StartLimitIntervalSec=60
	I0219 04:13:43.496074    8336 command_runner.go:130] > [Service]
	I0219 04:13:43.496074    8336 command_runner.go:130] > Type=notify
	I0219 04:13:43.496074    8336 command_runner.go:130] > Restart=on-failure
	I0219 04:13:43.496074    8336 command_runner.go:130] > Environment=NO_PROXY=172.28.244.121
	I0219 04:13:43.496074    8336 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0219 04:13:43.496074    8336 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0219 04:13:43.496074    8336 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0219 04:13:43.496074    8336 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0219 04:13:43.496074    8336 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0219 04:13:43.496074    8336 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0219 04:13:43.496074    8336 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0219 04:13:43.496074    8336 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0219 04:13:43.496074    8336 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0219 04:13:43.496074    8336 command_runner.go:130] > ExecStart=
	I0219 04:13:43.496074    8336 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0219 04:13:43.496074    8336 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0219 04:13:43.496074    8336 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0219 04:13:43.496074    8336 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0219 04:13:43.496074    8336 command_runner.go:130] > LimitNOFILE=infinity
	I0219 04:13:43.496074    8336 command_runner.go:130] > LimitNPROC=infinity
	I0219 04:13:43.496074    8336 command_runner.go:130] > LimitCORE=infinity
	I0219 04:13:43.496074    8336 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0219 04:13:43.496074    8336 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0219 04:13:43.496074    8336 command_runner.go:130] > TasksMax=infinity
	I0219 04:13:43.496074    8336 command_runner.go:130] > TimeoutStartSec=0
	I0219 04:13:43.496074    8336 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0219 04:13:43.496074    8336 command_runner.go:130] > Delegate=yes
	I0219 04:13:43.496074    8336 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0219 04:13:43.496074    8336 command_runner.go:130] > KillMode=process
	I0219 04:13:43.496074    8336 command_runner.go:130] > [Install]
	I0219 04:13:43.496634    8336 command_runner.go:130] > WantedBy=multi-user.target
	I0219 04:13:43.506803    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:13:43.535970    8336 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0219 04:13:43.572018    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:13:43.602582    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:13:43.634761    8336 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0219 04:13:43.695103    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:13:43.717664    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:13:43.750574    8336 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0219 04:13:43.750680    8336 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0219 04:13:43.761656    8336 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0219 04:13:43.940580    8336 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0219 04:13:44.108877    8336 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0219 04:13:44.109048    8336 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0219 04:13:44.150002    8336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:13:44.328491    8336 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:13:45.960993    8336 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6325072s)
	I0219 04:13:45.973134    8336 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:13:46.139471    8336 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0219 04:13:46.329664    8336 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:13:46.522620    8336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:13:46.701498    8336 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0219 04:13:46.731535    8336 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0219 04:13:46.741952    8336 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0219 04:13:46.748563    8336 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0219 04:13:46.749596    8336 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0219 04:13:46.749629    8336 command_runner.go:130] > Device: 16h/22d	Inode: 919         Links: 1
	I0219 04:13:46.749629    8336 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0219 04:13:46.749629    8336 command_runner.go:130] > Access: 2023-02-19 04:13:46.715568856 +0000
	I0219 04:13:46.749629    8336 command_runner.go:130] > Modify: 2023-02-19 04:13:46.715568856 +0000
	I0219 04:13:46.749629    8336 command_runner.go:130] > Change: 2023-02-19 04:13:46.720568224 +0000
	I0219 04:13:46.749629    8336 command_runner.go:130] >  Birth: -
	I0219 04:13:46.749728    8336 start.go:553] Will wait 60s for crictl version
	I0219 04:13:46.758993    8336 ssh_runner.go:195] Run: which crictl
	I0219 04:13:46.764676    8336 command_runner.go:130] > /usr/bin/crictl
	I0219 04:13:46.773383    8336 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0219 04:13:46.897420    8336 command_runner.go:130] > Version:  0.1.0
	I0219 04:13:46.897502    8336 command_runner.go:130] > RuntimeName:  docker
	I0219 04:13:46.897525    8336 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0219 04:13:46.897525    8336 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0219 04:13:46.897525    8336 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0219 04:13:46.906362    8336 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:13:46.949001    8336 command_runner.go:130] > 20.10.23
	I0219 04:13:46.957531    8336 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:13:46.996994    8336 command_runner.go:130] > 20.10.23
	I0219 04:13:47.005836    8336 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0219 04:13:47.008315    8336 out.go:177]   - env NO_PROXY=172.28.244.121
	I0219 04:13:47.011187    8336 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0219 04:13:47.016461    8336 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0219 04:13:47.016534    8336 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0219 04:13:47.016534    8336 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0219 04:13:47.016534    8336 ip.go:207] Found interface: {Index:11 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7f:a7:14 Flags:up|broadcast|multicast|running}
	I0219 04:13:47.019589    8336 ip.go:210] interface addr: fe80::8ff9:73c7:b894:c84f/64
	I0219 04:13:47.019589    8336 ip.go:210] interface addr: 172.28.240.1/20
	I0219 04:13:47.029802    8336 ssh_runner.go:195] Run: grep 172.28.240.1	host.minikube.internal$ /etc/hosts
	I0219 04:13:47.036193    8336 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
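The `/etc/hosts` rewrite above is an idempotent grep-and-append: strip any stale `host.minikube.internal` line, then emit the current host IP. A sketch against a scratch file (file path and IPs are illustrative):

```shell
# Scratch hosts file with a stale host.minikube.internal entry
HOSTS=/tmp/hosts-demo
printf '127.0.0.1\tlocalhost\n172.28.240.1\thost.minikube.internal\n' > "$HOSTS"

# Drop the old entry, append the new one, then replace the file atomically,
# mirroring minikube's { grep -v ...; echo ...; } > tmp; cp tmp /etc/hosts
{ grep -v $'\thost.minikube.internal$' "$HOSTS"; \
  printf '172.28.240.9\thost.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"

grep host.minikube.internal "$HOSTS"
```

Writing to a temp file first means a reader of the hosts file never observes a half-written state, which is why minikube copies via `/tmp/h.$$` rather than redirecting into `/etc/hosts` directly.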
	I0219 04:13:47.055523    8336 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900 for IP: 172.28.250.48
	I0219 04:13:47.055523    8336 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:13:47.056125    8336 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0219 04:13:47.056436    8336 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0219 04:13:47.056436    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0219 04:13:47.057081    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0219 04:13:47.057264    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0219 04:13:47.057725    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0219 04:13:47.058540    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem (1338 bytes)
	W0219 04:13:47.058827    8336 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148_empty.pem, impossibly tiny 0 bytes
	I0219 04:13:47.059065    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0219 04:13:47.059342    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0219 04:13:47.059655    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0219 04:13:47.059882    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0219 04:13:47.060521    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem (1708 bytes)
	I0219 04:13:47.060579    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:13:47.060579    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem -> /usr/share/ca-certificates/10148.pem
	I0219 04:13:47.060579    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> /usr/share/ca-certificates/101482.pem
	I0219 04:13:47.061752    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0219 04:13:47.104618    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0219 04:13:47.146900    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0219 04:13:47.186682    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0219 04:13:47.229178    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0219 04:13:47.269779    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem --> /usr/share/ca-certificates/10148.pem (1338 bytes)
	I0219 04:13:47.314480    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /usr/share/ca-certificates/101482.pem (1708 bytes)
	I0219 04:13:47.368597    8336 ssh_runner.go:195] Run: openssl version
	I0219 04:13:47.376332    8336 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0219 04:13:47.385569    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0219 04:13:47.415107    8336 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:13:47.421300    8336 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:13:47.422313    8336 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:13:47.430315    8336 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:13:47.442174    8336 command_runner.go:130] > b5213941
	I0219 04:13:47.453560    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0219 04:13:47.479295    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10148.pem && ln -fs /usr/share/ca-certificates/10148.pem /etc/ssl/certs/10148.pem"
	I0219 04:13:47.512770    8336 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10148.pem
	I0219 04:13:47.520148    8336 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:13:47.520148    8336 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:13:47.532621    8336 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10148.pem
	I0219 04:13:47.541126    8336 command_runner.go:130] > 51391683
	I0219 04:13:47.550192    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10148.pem /etc/ssl/certs/51391683.0"
	I0219 04:13:47.579606    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101482.pem && ln -fs /usr/share/ca-certificates/101482.pem /etc/ssl/certs/101482.pem"
	I0219 04:13:47.606236    8336 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101482.pem
	I0219 04:13:47.613724    8336 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:13:47.613843    8336 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:13:47.623881    8336 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101482.pem
	I0219 04:13:47.632579    8336 command_runner.go:130] > 3ec20f2e
	I0219 04:13:47.642781    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101482.pem /etc/ssl/certs/3ec20f2e.0"
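The `openssl x509 -hash` / `ln -fs` pairs above install each CA under a subject-hash-named symlink, which is how OpenSSL locates trust anchors in a certificate directory. A self-contained sketch with a throwaway CA (names and directory are illustrative):

```shell
# Generate a throwaway self-signed CA, then link it under its subject hash,
# as the ln -fs steps in the log do for minikubeCA.pem
DIR=/tmp/certs-demo
mkdir -p "$DIR"
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$DIR/ca.key" \
  -out "$DIR/demoCA.pem" -days 1 -subj "/CN=demoCA" 2>/dev/null

HASH=$(openssl x509 -hash -noout -in "$DIR/demoCA.pem")
ln -fs "$DIR/demoCA.pem" "$DIR/$HASH.0"

ls -l "$DIR/$HASH.0"
```

The `.0` suffix disambiguates distinct certificates whose subjects hash to the same value; minikube uses `ln -fs` so rerunning the step simply refreshes the link.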
	I0219 04:13:47.669032    8336 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0219 04:13:47.713214    8336 command_runner.go:130] > cgroupfs
	I0219 04:13:47.713451    8336 cni.go:84] Creating CNI manager for ""
	I0219 04:13:47.713451    8336 cni.go:136] 3 nodes found, recommending kindnet
	I0219 04:13:47.713451    8336 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0219 04:13:47.713451    8336 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.250.48 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-657900 NodeName:multinode-657900-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.244.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.250.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0219 04:13:47.713451    8336 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.250.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-657900-m02"
	  kubeletExtraArgs:
	    node-ip: 172.28.250.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.244.121"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0219 04:13:47.713451    8336 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-657900-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.250.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-657900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0219 04:13:47.723376    8336 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0219 04:13:47.742707    8336 command_runner.go:130] > kubeadm
	I0219 04:13:47.743470    8336 command_runner.go:130] > kubectl
	I0219 04:13:47.743470    8336 command_runner.go:130] > kubelet
	I0219 04:13:47.743531    8336 binaries.go:44] Found k8s binaries, skipping transfer
	I0219 04:13:47.753571    8336 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0219 04:13:47.770126    8336 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (453 bytes)
	I0219 04:13:47.804913    8336 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0219 04:13:47.843389    8336 ssh_runner.go:195] Run: grep 172.28.244.121	control-plane.minikube.internal$ /etc/hosts
	I0219 04:13:47.849533    8336 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.244.121	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:13:47.868749    8336 host.go:66] Checking if "multinode-657900" exists ...
	I0219 04:13:47.869302    8336 config.go:182] Loaded profile config "multinode-657900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:13:47.869370    8336 start.go:301] JoinCluster: &{Name:multinode-657900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.26.1 ClusterName:multinode-657900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.244.121 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.250.48 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.246.126 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:13:47.869548    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0219 04:13:47.869548    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:13:48.600200    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:48.600263    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:48.600263    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:49.639336    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:13:49.639440    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:49.639911    8336 sshutil.go:53] new ssh client: &{IP:172.28.244.121 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\id_rsa Username:docker}
	I0219 04:13:49.889657    8336 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token tkyg86.u49qsjg4oyazuayp --discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 
	I0219 04:13:49.889751    8336 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0": (2.0202104s)
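The `kubeadm token create --print-join-command` output above carries the bootstrap token and discovery CA-cert hash that the later `kubeadm join` in this log reuses verbatim. As a minimal illustration (the `sed` patterns are an assumption for this sketch, not minikube's actual parsing), the two fields can be pulled out of such a line like this:

```shell
# Hypothetical sketch: extract the --token and --discovery-token-ca-cert-hash
# values from a printed kubeadm join command (values taken from the log above).
join_cmd='kubeadm join control-plane.minikube.internal:8443 --token tkyg86.u49qsjg4oyazuayp --discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1'

# Capture the first space-delimited word after each flag.
token=$(printf '%s\n' "$join_cmd" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
hash=$(printf '%s\n' "$join_cmd" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')

echo "$token"   # → tkyg86.u49qsjg4oyazuayp
echo "$hash"    # → sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1
```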
	I0219 04:13:49.889869    8336 start.go:314] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.28.250.48 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0219 04:13:49.889932    8336 host.go:66] Checking if "multinode-657900" exists ...
	I0219 04:13:49.900608    8336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl drain multinode-657900-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0219 04:13:49.900608    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:13:50.619380    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:13:50.619380    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:50.619472    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:13:51.681038    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:13:51.681250    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:13:51.681780    8336 sshutil.go:53] new ssh client: &{IP:172.28.244.121 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\id_rsa Username:docker}
	I0219 04:13:51.871967    8336 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0219 04:13:51.954243    8336 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-fp2c9, kube-system/kube-proxy-8h9z4
	I0219 04:13:54.981274    8336 command_runner.go:130] > node/multinode-657900-m02 cordoned
	I0219 04:13:54.981274    8336 command_runner.go:130] > pod "busybox-6b86dd6d48-brhr9" has DeletionTimestamp older than 1 seconds, skipping
	I0219 04:13:54.981274    8336 command_runner.go:130] > node/multinode-657900-m02 drained
	I0219 04:13:54.981274    8336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl drain multinode-657900-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (5.0806827s)
	I0219 04:13:54.981274    8336 node.go:109] successfully drained node "m02"
	I0219 04:13:54.982415    8336 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:13:54.983103    8336 kapi.go:59] client config for multinode-657900: &rest.Config{Host:"https://172.28.244.121:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:13:54.983941    8336 request.go:1171] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0219 04:13:54.983941    8336 round_trippers.go:463] DELETE https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:13:54.983941    8336 round_trippers.go:469] Request Headers:
	I0219 04:13:54.983941    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:13:54.983941    8336 round_trippers.go:473]     Content-Type: application/json
	I0219 04:13:54.983941    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:13:54.996363    8336 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0219 04:13:54.996363    8336 round_trippers.go:577] Response Headers:
	I0219 04:13:54.996363    8336 round_trippers.go:580]     Content-Length: 171
	I0219 04:13:54.996363    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:13:54 GMT
	I0219 04:13:54.996363    8336 round_trippers.go:580]     Audit-Id: b2360adc-3aea-4328-a88c-11d37fa38ca5
	I0219 04:13:54.996363    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:13:54.997098    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:13:54.997098    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:13:54.997098    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:13:54.997098    8336 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-657900-m02","kind":"nodes","uid":"c9a58c3b-8e81-40c3-a62f-7f6dc40e33e9"}}
	I0219 04:13:54.997206    8336 node.go:125] successfully deleted node "m02"
	I0219 04:13:54.997206    8336 start.go:318] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.28.250.48 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0219 04:13:54.997206    8336 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.28.250.48 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0219 04:13:54.997399    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tkyg86.u49qsjg4oyazuayp --discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-657900-m02"
	I0219 04:13:55.344437    8336 command_runner.go:130] ! W0219 04:13:55.336909    1309 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:13:56.101430    8336 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0219 04:13:57.888597    8336 command_runner.go:130] > [preflight] Running pre-flight checks
	I0219 04:13:57.888597    8336 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0219 04:13:57.888743    8336 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0219 04:13:57.888743    8336 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0219 04:13:57.888743    8336 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0219 04:13:57.888743    8336 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0219 04:13:57.888743    8336 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0219 04:13:57.888743    8336 command_runner.go:130] > This node has joined the cluster:
	I0219 04:13:57.888743    8336 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0219 04:13:57.888743    8336 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0219 04:13:57.888875    8336 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0219 04:13:57.888875    8336 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tkyg86.u49qsjg4oyazuayp --discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-657900-m02": (2.891485s)
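The preflight warning earlier in this join (`Usage of CRI endpoints without URL scheme is deprecated ... Automatically prepending scheme "unix"`) refers to kubeadm normalizing a bare socket path into a `unix://` URL. A minimal, hypothetical sketch of that normalization (illustrative helper only, not kubeadm's code):

```shell
# Hypothetical helper: prepend "unix://" to a CRI endpoint that lacks a URL
# scheme, mirroring the normalization kubeadm warns about in the log above.
normalize_cri_socket() {
  case "$1" in
    *://*) printf '%s\n' "$1" ;;            # already has a scheme, keep as-is
    *)     printf 'unix://%s\n' "$1" ;;     # bare path: prepend unix://
  esac
}

normalize_cri_socket /var/run/cri-dockerd.sock   # → unix:///var/run/cri-dockerd.sock
```

Passing the scheme-qualified form (e.g. `unix:///var/run/cri-dockerd.sock`) on the `--cri-socket` flag avoids the warning entirely.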
	I0219 04:13:57.888875    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0219 04:13:58.113809    8336 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0219 04:13:58.278038    8336 start.go:303] JoinCluster complete in 10.4087026s
	I0219 04:13:58.278115    8336 cni.go:84] Creating CNI manager for ""
	I0219 04:13:58.278115    8336 cni.go:136] 3 nodes found, recommending kindnet
	I0219 04:13:58.288621    8336 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0219 04:13:58.296031    8336 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0219 04:13:58.296031    8336 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0219 04:13:58.296031    8336 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0219 04:13:58.296031    8336 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0219 04:13:58.296031    8336 command_runner.go:130] > Access: 2023-02-19 04:11:42.350359200 +0000
	I0219 04:13:58.296031    8336 command_runner.go:130] > Modify: 2023-02-16 22:59:55.000000000 +0000
	I0219 04:13:58.296031    8336 command_runner.go:130] > Change: 2023-02-19 04:11:32.681000000 +0000
	I0219 04:13:58.296031    8336 command_runner.go:130] >  Birth: -
	I0219 04:13:58.296161    8336 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0219 04:13:58.296256    8336 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0219 04:13:58.345375    8336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0219 04:13:58.676499    8336 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0219 04:13:58.676499    8336 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0219 04:13:58.676499    8336 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0219 04:13:58.676499    8336 command_runner.go:130] > daemonset.apps/kindnet configured
	I0219 04:13:58.678010    8336 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:13:58.678604    8336 kapi.go:59] client config for multinode-657900: &rest.Config{Host:"https://172.28.244.121:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:13:58.679774    8336 round_trippers.go:463] GET https://172.28.244.121:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0219 04:13:58.679828    8336 round_trippers.go:469] Request Headers:
	I0219 04:13:58.679828    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:13:58.679828    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:13:58.682021    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:13:58.682021    8336 round_trippers.go:577] Response Headers:
	I0219 04:13:58.682021    8336 round_trippers.go:580]     Content-Length: 292
	I0219 04:13:58.682021    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:13:58 GMT
	I0219 04:13:58.682021    8336 round_trippers.go:580]     Audit-Id: 505fb85f-47d6-4f43-974a-8de2eb58042c
	I0219 04:13:58.682021    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:13:58.682021    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:13:58.682021    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:13:58.682021    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:13:58.682021    8336 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15caddfb-a629-49c9-8b4b-8cd8e13b08e2","resourceVersion":"1261","creationTimestamp":"2023-02-19T04:00:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0219 04:13:58.683742    8336 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-657900" context rescaled to 1 replicas
	I0219 04:13:58.683742    8336 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.28.250.48 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0219 04:13:58.688636    8336 out.go:177] * Verifying Kubernetes components...
	I0219 04:13:58.704382    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:13:58.733851    8336 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:13:58.734523    8336 kapi.go:59] client config for multinode-657900: &rest.Config{Host:"https://172.28.244.121:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:13:58.735277    8336 node_ready.go:35] waiting up to 6m0s for node "multinode-657900-m02" to be "Ready" ...
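The readiness wait above repeatedly GETs the node object for up to 6m0s until its `Ready` condition reports `True`. A minimal, hypothetical shell sketch of such a bounded poll loop (`wait_for` is an illustrative helper under assumed semantics, not minikube's actual wait code; the marker file stands in for the Ready check):

```shell
# Hypothetical bounded poll: run a predicate command every $interval seconds,
# up to $tries attempts; succeed as soon as the predicate does.
wait_for() {
  local tries=$1 interval=$2
  shift 2
  local i
  for i in $(seq 1 "$tries"); do
    if "$@"; then return 0; fi
    sleep "$interval"
  done
  return 1
}

# Demo predicate: a marker file standing in for "node is Ready".
touch /tmp/node_ready_marker
wait_for 3 0 test -f /tmp/node_ready_marker && echo ready   # → ready
rm -f /tmp/node_ready_marker
```

In the real loop the predicate would be something like checking the node's `Ready` condition via the API, with roughly a 500 ms interval, as the timestamps of the GETs below suggest.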
	I0219 04:13:58.735335    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:13:58.735450    8336 round_trippers.go:469] Request Headers:
	I0219 04:13:58.735450    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:13:58.735450    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:13:58.737754    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:13:58.738623    8336 round_trippers.go:577] Response Headers:
	I0219 04:13:58.738715    8336 round_trippers.go:580]     Audit-Id: b6804947-8d7e-4a5b-8314-5e169bc6d3f8
	I0219 04:13:58.738715    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:13:58.738792    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:13:58.738832    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:13:58.738876    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:13:58.738876    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:13:58 GMT
	I0219 04:13:58.739251    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1367","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4475 chars]
	I0219 04:13:59.244528    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:13:59.244587    8336 round_trippers.go:469] Request Headers:
	I0219 04:13:59.244587    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:13:59.244587    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:13:59.247520    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:13:59.247520    8336 round_trippers.go:577] Response Headers:
	I0219 04:13:59.247520    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:13:59 GMT
	I0219 04:13:59.247520    8336 round_trippers.go:580]     Audit-Id: fa51a02c-be2c-4f8b-9a3b-6800575af061
	I0219 04:13:59.247520    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:13:59.247638    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:13:59.247638    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:13:59.247638    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:13:59.247998    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1367","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4475 chars]
	I0219 04:13:59.751593    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:13:59.751593    8336 round_trippers.go:469] Request Headers:
	I0219 04:13:59.751681    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:13:59.751681    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:13:59.754062    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:13:59.754062    8336 round_trippers.go:577] Response Headers:
	I0219 04:13:59.754062    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:13:59.754062    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:13:59 GMT
	I0219 04:13:59.754062    8336 round_trippers.go:580]     Audit-Id: 0bd0a50d-dde8-4d7b-a5d0-22248f238aeb
	I0219 04:13:59.754062    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:13:59.754062    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:13:59.754062    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:13:59.754062    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1367","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4475 chars]
	I0219 04:14:00.244393    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:14:00.244393    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:00.244393    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:00.244393    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:00.248020    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:00.248020    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:00.248020    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:00 GMT
	I0219 04:14:00.248020    8336 round_trippers.go:580]     Audit-Id: 93f4564e-2edb-44a7-8b5a-9b8f862db41d
	I0219 04:14:00.248020    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:00.248020    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:00.248020    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:00.248427    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:00.248714    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1367","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4475 chars]
	I0219 04:14:00.746512    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:14:00.746512    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:00.746512    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:00.746512    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:00.749132    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:14:00.749132    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:00.750025    8336 round_trippers.go:580]     Audit-Id: 9c105859-5450-4a75-b7f4-aed3ae37e5ef
	I0219 04:14:00.750025    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:00.750025    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:00.750085    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:00.750085    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:00.750085    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:00 GMT
	I0219 04:14:00.750316    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1367","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4475 chars]
	I0219 04:14:00.750477    8336 node_ready.go:58] node "multinode-657900-m02" has status "Ready":"False"
	I0219 04:14:01.247291    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:14:01.247386    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:01.247386    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:01.247386    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:01.250798    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:01.251760    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:01.251826    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:01.251826    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:01 GMT
	I0219 04:14:01.251826    8336 round_trippers.go:580]     Audit-Id: 9a966077-b717-459b-8cbe-d0868d1cf595
	I0219 04:14:01.251826    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:01.251826    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:01.251826    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:01.251826    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1385","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:14:01.749700    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:14:01.749700    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:01.749700    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:01.749700    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:01.753453    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:01.753570    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:01.753570    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:01.753570    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:01 GMT
	I0219 04:14:01.753570    8336 round_trippers.go:580]     Audit-Id: e2b75ef0-41b9-4ee4-8129-e402905e73ed
	I0219 04:14:01.753570    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:01.753691    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:01.753691    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:01.753945    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1385","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:14:02.240537    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:14:02.240537    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:02.240683    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:02.240683    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:02.247165    8336 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0219 04:14:02.247165    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:02.247165    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:02.247165    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:02 GMT
	I0219 04:14:02.247165    8336 round_trippers.go:580]     Audit-Id: 5bf06a3b-b3ef-4a75-b70d-8c73aad8b8fb
	I0219 04:14:02.247165    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:02.247165    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:02.247165    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:02.247165    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1385","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:14:02.741185    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:14:02.741185    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:02.741185    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:02.741185    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:02.744802    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:02.744802    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:02.744802    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:02.744802    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:02.744802    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:02 GMT
	I0219 04:14:02.744802    8336 round_trippers.go:580]     Audit-Id: ba5e70f5-4716-4c80-a755-9b2a8c7b877a
	I0219 04:14:02.744802    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:02.744802    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:02.744802    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1385","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:14:03.250169    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:14:03.250169    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:03.250169    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:03.250169    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:03.253792    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:03.253792    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:03.253792    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:03.254762    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:03.254762    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:03 GMT
	I0219 04:14:03.254762    8336 round_trippers.go:580]     Audit-Id: 8c8db01a-d15c-460f-96c7-035b2e04c4ab
	I0219 04:14:03.254762    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:03.254814    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:03.255106    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1385","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:14:03.255580    8336 node_ready.go:58] node "multinode-657900-m02" has status "Ready":"False"
	I0219 04:14:03.749615    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:14:03.749685    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:03.749685    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:03.749685    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:03.752449    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:14:03.752449    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:03.752449    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:03 GMT
	I0219 04:14:03.753352    8336 round_trippers.go:580]     Audit-Id: e930b7f0-4f99-4f7d-a9c9-2237a510bf49
	I0219 04:14:03.753352    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:03.753352    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:03.753352    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:03.753352    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:03.753422    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1385","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:14:04.254267    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:14:04.254348    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:04.254348    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:04.254348    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:04.258028    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:04.258028    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:04.258028    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:04 GMT
	I0219 04:14:04.258028    8336 round_trippers.go:580]     Audit-Id: d5396024-587e-4f21-ad0f-ad156b3afcfc
	I0219 04:14:04.258028    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:04.258028    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:04.258028    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:04.258028    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:04.258028    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1385","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:14:04.746483    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:14:04.746483    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:04.746483    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:04.746483    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:04.750408    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:04.750408    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:04.750469    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:04.750469    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:04.750469    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:04 GMT
	I0219 04:14:04.750469    8336 round_trippers.go:580]     Audit-Id: 555c1e06-8d01-4631-bef2-6eb11869121c
	I0219 04:14:04.750469    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:04.750469    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:04.750469    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1385","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:14:05.254431    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:14:05.254504    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:05.254504    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:05.254504    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:05.258676    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:14:05.258676    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:05.259248    8336 round_trippers.go:580]     Audit-Id: f97e869d-f4b1-448b-884e-e16c6bf681bc
	I0219 04:14:05.259248    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:05.259248    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:05.259248    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:05.259248    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:05.259248    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:05 GMT
	I0219 04:14:05.259486    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1385","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:14:05.259958    8336 node_ready.go:58] node "multinode-657900-m02" has status "Ready":"False"
	I0219 04:14:05.752747    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:14:05.752747    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:05.752747    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:05.752747    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:05.758647    8336 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0219 04:14:05.758647    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:05.758647    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:05.758647    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:05 GMT
	I0219 04:14:05.758647    8336 round_trippers.go:580]     Audit-Id: f74d6416-80ac-42eb-ae92-0d3079dd65e4
	I0219 04:14:05.758647    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:05.758647    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:05.758647    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:05.759314    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1385","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:14:06.254394    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:14:06.254474    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:06.254474    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:06.254566    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:06.258344    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:06.258344    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:06.258344    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:06.258344    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:06 GMT
	I0219 04:14:06.258344    8336 round_trippers.go:580]     Audit-Id: d47a251b-63f2-47e4-a2a3-95484e7dcd15
	I0219 04:14:06.258344    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:06.258344    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:06.258344    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:06.258344    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1385","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:14:06.740691    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:14:06.740852    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:06.740852    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:06.740852    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:06.744242    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:06.744242    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:06.744859    8336 round_trippers.go:580]     Audit-Id: c82d890c-8fc6-4e95-a2c4-c072fc92a5ab
	I0219 04:14:06.744859    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:06.744859    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:06.744859    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:06.744915    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:06.744915    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:06 GMT
	I0219 04:14:06.745093    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1385","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:14:07.245414    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:14:07.245494    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:07.245494    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:07.245494    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:07.248770    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:07.248770    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:07.248770    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:07.248770    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:07 GMT
	I0219 04:14:07.248770    8336 round_trippers.go:580]     Audit-Id: 07d76cf0-dbb1-4f4d-8302-3a588bf6495e
	I0219 04:14:07.248770    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:07.248770    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:07.248770    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:07.248770    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1402","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4619 chars]
	I0219 04:14:07.248770    8336 node_ready.go:49] node "multinode-657900-m02" has status "Ready":"True"
	I0219 04:14:07.248770    8336 node_ready.go:38] duration metric: took 8.5134635s waiting for node "multinode-657900-m02" to be "Ready" ...
	I0219 04:14:07.248770    8336 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:14:07.248770    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods
	I0219 04:14:07.248770    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:07.248770    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:07.248770    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:07.255008    8336 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0219 04:14:07.255106    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:07.255106    8336 round_trippers.go:580]     Audit-Id: 622972b2-da19-40fc-9bdf-f3d1c919ce8a
	I0219 04:14:07.255182    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:07.255182    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:07.255182    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:07.255182    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:07.255182    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:07 GMT
	I0219 04:14:07.256525    8336 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1404"},"items":[{"metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1257","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83332 chars]
	I0219 04:14:07.261210    8336 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace to be "Ready" ...
	I0219 04:14:07.261210    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:14:07.261210    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:07.261210    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:07.261210    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:07.264283    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:07.264283    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:07.264283    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:07 GMT
	I0219 04:14:07.264283    8336 round_trippers.go:580]     Audit-Id: 576a1881-a908-4cf8-811b-8b9928ac19e6
	I0219 04:14:07.264283    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:07.264283    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:07.264283    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:07.264283    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:07.264283    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1257","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6492 chars]
	I0219 04:14:07.265215    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:14:07.265215    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:07.265215    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:07.265215    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:07.268224    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:07.268224    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:07.268224    8336 round_trippers.go:580]     Audit-Id: f987e07d-4b8e-4390-89ea-ed205f788e84
	I0219 04:14:07.268224    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:07.268224    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:07.268564    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:07.268564    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:07.268564    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:07 GMT
	I0219 04:14:07.268794    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:14:07.269022    8336 pod_ready.go:92] pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace has status "Ready":"True"
	I0219 04:14:07.269022    8336 pod_ready.go:81] duration metric: took 7.8123ms waiting for pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace to be "Ready" ...
	I0219 04:14:07.269022    8336 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:14:07.269022    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-657900
	I0219 04:14:07.269022    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:07.269022    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:07.269022    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:07.272797    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:07.272797    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:07.272797    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:07.272797    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:07.273561    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:07 GMT
	I0219 04:14:07.273561    8336 round_trippers.go:580]     Audit-Id: 0a6638c1-0821-4550-aff1-1357c4456647
	I0219 04:14:07.273561    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:07.273561    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:07.273721    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-657900","namespace":"kube-system","uid":"e77b4ae1-9bb6-48e7-a39d-b91eaa2fbe32","resourceVersion":"1229","creationTimestamp":"2023-02-19T04:12:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.244.121:2379","kubernetes.io/config.hash":"cf2e032f8176f837f5bcf073190e4313","kubernetes.io/config.mirror":"cf2e032f8176f837f5bcf073190e4313","kubernetes.io/config.seen":"2023-02-19T04:12:18.622144946Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:12:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5857 chars]
	I0219 04:14:07.273965    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:14:07.273965    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:07.273965    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:07.273965    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:07.276624    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:14:07.276624    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:07.276624    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:07 GMT
	I0219 04:14:07.276624    8336 round_trippers.go:580]     Audit-Id: 1bac44f2-c548-466b-bb3b-cce81be723c1
	I0219 04:14:07.277588    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:07.277588    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:07.277588    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:07.277622    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:07.277727    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:14:07.277727    8336 pod_ready.go:92] pod "etcd-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:14:07.278333    8336 pod_ready.go:81] duration metric: took 9.311ms waiting for pod "etcd-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:14:07.278333    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:14:07.278333    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-657900
	I0219 04:14:07.278333    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:07.278482    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:07.278482    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:07.284525    8336 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0219 04:14:07.284525    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:07.284525    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:07 GMT
	I0219 04:14:07.284525    8336 round_trippers.go:580]     Audit-Id: d51feb70-7c37-433e-8db8-12b55003fb8d
	I0219 04:14:07.284525    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:07.284525    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:07.284525    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:07.284525    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:07.285096    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-657900","namespace":"kube-system","uid":"e47db067-f2ff-412b-954f-0b6b6cf42f8b","resourceVersion":"1186","creationTimestamp":"2023-02-19T04:12:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.244.121:8443","kubernetes.io/config.hash":"64d9d1395b6e25aebebbf4adfc03e069","kubernetes.io/config.mirror":"64d9d1395b6e25aebebbf4adfc03e069","kubernetes.io/config.seen":"2023-02-19T04:12:18.621131732Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:12:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7393 chars]
	I0219 04:14:07.285243    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:14:07.285243    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:07.285243    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:07.285243    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:07.286979    8336 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0219 04:14:07.286979    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:07.286979    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:07.286979    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:07.286979    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:07.286979    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:07.286979    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:07 GMT
	I0219 04:14:07.286979    8336 round_trippers.go:580]     Audit-Id: 20d5a070-bb91-425a-a8a0-382a6dfd156c
	I0219 04:14:07.286979    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:14:07.288475    8336 pod_ready.go:92] pod "kube-apiserver-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:14:07.288475    8336 pod_ready.go:81] duration metric: took 10.1426ms waiting for pod "kube-apiserver-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:14:07.288475    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:14:07.288475    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-657900
	I0219 04:14:07.288475    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:07.288475    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:07.288475    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:07.291979    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:07.291979    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:07.291979    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:07 GMT
	I0219 04:14:07.291979    8336 round_trippers.go:580]     Audit-Id: e26e4498-b53f-4564-a11a-70cf68029b06
	I0219 04:14:07.291979    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:07.291979    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:07.291979    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:07.291979    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:07.292980    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-657900","namespace":"kube-system","uid":"463b901e-dd04-46fc-91a3-9917b12590ff","resourceVersion":"1192","creationTimestamp":"2023-02-19T04:00:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cd5ea91854c20d0b081e1be96fa370f","kubernetes.io/config.mirror":"7cd5ea91854c20d0b081e1be96fa370f","kubernetes.io/config.seen":"2023-02-19T04:00:19.445306645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7163 chars]
	I0219 04:14:07.292980    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:14:07.292980    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:07.292980    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:07.292980    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:07.295979    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:14:07.295979    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:07.295979    8336 round_trippers.go:580]     Audit-Id: 824e64db-9944-4f56-8e71-fd8c31027352
	I0219 04:14:07.295979    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:07.295979    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:07.295979    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:07.295979    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:07.295979    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:07 GMT
	I0219 04:14:07.298433    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:14:07.298972    8336 pod_ready.go:92] pod "kube-controller-manager-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:14:07.298972    8336 pod_ready.go:81] duration metric: took 10.4965ms waiting for pod "kube-controller-manager-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:14:07.298972    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8h9z4" in "kube-system" namespace to be "Ready" ...
	I0219 04:14:07.447486    8336 request.go:622] Waited for 148.2914ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h9z4
	I0219 04:14:07.447486    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h9z4
	I0219 04:14:07.447486    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:07.447486    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:07.447486    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:07.452025    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:14:07.452025    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:07.452263    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:07.452263    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:07.452263    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:07.452263    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:07 GMT
	I0219 04:14:07.452263    8336 round_trippers.go:580]     Audit-Id: d04aef3b-f224-49ff-9512-74f6a20f9a75
	I0219 04:14:07.452263    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:07.453405    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h9z4","generateName":"kube-proxy-","namespace":"kube-system","uid":"5ff10d29-0b2a-4046-a946-90b1a4d8bcb7","resourceVersion":"1392","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"86ae75b5-707b-4d98-a30e-e970d37cba85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86ae75b5-707b-4d98-a30e-e970d37cba85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0219 04:14:07.650395    8336 request.go:622] Waited for 195.9282ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:14:07.650638    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:14:07.650704    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:07.650704    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:07.650704    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:07.654049    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:07.654049    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:07.654049    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:07.654049    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:07.654049    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:07.654049    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:07 GMT
	I0219 04:14:07.655071    8336 round_trippers.go:580]     Audit-Id: 04fed384-2e38-4f8a-aff4-c220c5426f0f
	I0219 04:14:07.655145    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:07.655186    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1402","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4619 chars]
	I0219 04:14:07.656000    8336 pod_ready.go:92] pod "kube-proxy-8h9z4" in "kube-system" namespace has status "Ready":"True"
	I0219 04:14:07.656000    8336 pod_ready.go:81] duration metric: took 357.0296ms waiting for pod "kube-proxy-8h9z4" in "kube-system" namespace to be "Ready" ...
	I0219 04:14:07.656070    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kcm8m" in "kube-system" namespace to be "Ready" ...
	I0219 04:14:07.851625    8336 request.go:622] Waited for 195.3346ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kcm8m
	I0219 04:14:07.851842    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kcm8m
	I0219 04:14:07.851842    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:07.851842    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:07.851842    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:07.860547    8336 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0219 04:14:07.860547    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:07.860547    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:07.860547    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:07 GMT
	I0219 04:14:07.860547    8336 round_trippers.go:580]     Audit-Id: 84442a43-49f8-4406-b605-c7c618cad32d
	I0219 04:14:07.860547    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:07.860547    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:07.860547    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:07.861106    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kcm8m","generateName":"kube-proxy-","namespace":"kube-system","uid":"8ce14b4f-6df3-4822-ac2b-06f3417e8eaa","resourceVersion":"1198","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"86ae75b5-707b-4d98-a30e-e970d37cba85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86ae75b5-707b-4d98-a30e-e970d37cba85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0219 04:14:08.055810    8336 request.go:622] Waited for 193.7697ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:14:08.055896    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:14:08.055896    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:08.055896    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:08.055896    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:08.060808    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:08.060808    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:08.060876    8336 round_trippers.go:580]     Audit-Id: 113e912a-f2ce-40a6-bdad-e322e47b4495
	I0219 04:14:08.060876    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:08.060876    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:08.060876    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:08.060876    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:08.060876    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:08 GMT
	I0219 04:14:08.061189    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:14:08.061837    8336 pod_ready.go:92] pod "kube-proxy-kcm8m" in "kube-system" namespace has status "Ready":"True"
	I0219 04:14:08.061837    8336 pod_ready.go:81] duration metric: took 405.769ms waiting for pod "kube-proxy-kcm8m" in "kube-system" namespace to be "Ready" ...
	I0219 04:14:08.061948    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n5vsl" in "kube-system" namespace to be "Ready" ...
	I0219 04:14:08.257943    8336 request.go:622] Waited for 195.8486ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n5vsl
	I0219 04:14:08.258242    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n5vsl
	I0219 04:14:08.258295    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:08.258295    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:08.258295    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:08.261936    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:08.261936    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:08.261936    8336 round_trippers.go:580]     Audit-Id: 81cf9567-2050-446c-bd87-5ec1514f65af
	I0219 04:14:08.262846    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:08.262846    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:08.262846    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:08.262916    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:08.262916    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:08 GMT
	I0219 04:14:08.263183    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-n5vsl","generateName":"kube-proxy-","namespace":"kube-system","uid":"8757301c-e7d4-4784-8e1b-8e1f24aeabcd","resourceVersion":"1304","creationTimestamp":"2023-02-19T04:05:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"86ae75b5-707b-4d98-a30e-e970d37cba85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:05:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86ae75b5-707b-4d98-a30e-e970d37cba85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5976 chars]
	I0219 04:14:08.448002    8336 request.go:622] Waited for 183.9538ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:14:08.448094    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:14:08.448094    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:08.448262    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:08.448262    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:08.451672    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:14:08.451672    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:08.452484    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:08 GMT
	I0219 04:14:08.452484    8336 round_trippers.go:580]     Audit-Id: 1279b1b5-9f72-42ce-87bc-9cfb175dc052
	I0219 04:14:08.452484    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:08.452484    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:08.452484    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:08.452559    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:08.452712    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"5442a324-219c-450a-bc84-42446fe87d39","resourceVersion":"1317","creationTimestamp":"2023-02-19T04:09:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:09:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 5087 chars]
	I0219 04:14:08.453373    8336 pod_ready.go:97] node "multinode-657900-m03" hosting pod "kube-proxy-n5vsl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-657900-m03" has status "Ready":"Unknown"
	I0219 04:14:08.453373    8336 pod_ready.go:81] duration metric: took 391.4264ms waiting for pod "kube-proxy-n5vsl" in "kube-system" namespace to be "Ready" ...
	E0219 04:14:08.453373    8336 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-657900-m03" hosting pod "kube-proxy-n5vsl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-657900-m03" has status "Ready":"Unknown"
	I0219 04:14:08.453373    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:14:08.648880    8336 request.go:622] Waited for 195.3742ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-657900
	I0219 04:14:08.649117    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-657900
	I0219 04:14:08.649117    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:08.649117    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:08.649117    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:08.651545    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:14:08.651545    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:08.651545    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:08.651545    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:08.651545    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:08.652562    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:08 GMT
	I0219 04:14:08.652562    8336 round_trippers.go:580]     Audit-Id: 1fdc2295-eb41-4200-927d-078d010bd7d1
	I0219 04:14:08.652608    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:08.652803    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-657900","namespace":"kube-system","uid":"ba38eff9-ab82-463a-bb6f-8af5e4599f15","resourceVersion":"1223","creationTimestamp":"2023-02-19T04:00:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d67ab919dfafdb0eecec781e708349ff","kubernetes.io/config.mirror":"d67ab919dfafdb0eecec781e708349ff","kubernetes.io/config.seen":"2023-02-19T04:00:19.445308045Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4893 chars]
	I0219 04:14:08.852172    8336 request.go:622] Waited for 198.7398ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:14:08.852172    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:14:08.852172    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:08.852172    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:08.852172    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:08.856804    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:14:08.856896    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:08.856896    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:08.856896    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:08.856896    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:08.856969    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:08 GMT
	I0219 04:14:08.856969    8336 round_trippers.go:580]     Audit-Id: e10eb067-b99b-45f7-895c-2a769490fd98
	I0219 04:14:08.856969    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:08.856969    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:14:08.857691    8336 pod_ready.go:92] pod "kube-scheduler-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:14:08.857691    8336 pod_ready.go:81] duration metric: took 404.3192ms waiting for pod "kube-scheduler-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:14:08.857691    8336 pod_ready.go:38] duration metric: took 1.608926s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:14:08.857691    8336 system_svc.go:44] waiting for kubelet service to be running ....
	I0219 04:14:08.867573    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:14:08.886271    8336 system_svc.go:56] duration metric: took 28.5803ms WaitForService to wait for kubelet.
	I0219 04:14:08.886271    8336 kubeadm.go:578] duration metric: took 10.2025624s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0219 04:14:08.886271    8336 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:14:09.056653    8336 request.go:622] Waited for 170.1759ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes
	I0219 04:14:09.056869    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes
	I0219 04:14:09.056869    8336 round_trippers.go:469] Request Headers:
	I0219 04:14:09.056869    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:14:09.056869    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:14:09.064311    8336 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:14:09.064311    8336 round_trippers.go:577] Response Headers:
	I0219 04:14:09.064311    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:14:09.064311    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:14:09.064311    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:14:09 GMT
	I0219 04:14:09.064311    8336 round_trippers.go:580]     Audit-Id: 2b949093-4efd-42ac-8608-da03d0ed29a3
	I0219 04:14:09.064311    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:14:09.064311    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:14:09.064311    8336 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1406"},"items":[{"metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 17138 chars]
	I0219 04:14:09.066388    8336 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:14:09.066388    8336 node_conditions.go:123] node cpu capacity is 2
	I0219 04:14:09.066388    8336 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:14:09.066388    8336 node_conditions.go:123] node cpu capacity is 2
	I0219 04:14:09.066388    8336 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:14:09.066388    8336 node_conditions.go:123] node cpu capacity is 2
	I0219 04:14:09.066388    8336 node_conditions.go:105] duration metric: took 180.1175ms to run NodePressure ...
	I0219 04:14:09.066388    8336 start.go:228] waiting for startup goroutines ...
	I0219 04:14:09.066388    8336 start.go:242] writing updated cluster config ...
	I0219 04:14:09.077651    8336 config.go:182] Loaded profile config "multinode-657900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:14:09.078228    8336 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\config.json ...
	I0219 04:14:09.085542    8336 out.go:177] * Starting worker node multinode-657900-m03 in cluster multinode-657900
	I0219 04:14:09.087498    8336 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:14:09.087498    8336 cache.go:57] Caching tarball of preloaded images
	I0219 04:14:09.088238    8336 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0219 04:14:09.088238    8336 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0219 04:14:09.088238    8336 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\config.json ...
	I0219 04:14:09.090941    8336 cache.go:193] Successfully downloaded all kic artifacts
	I0219 04:14:09.090941    8336 start.go:364] acquiring machines lock for multinode-657900-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0219 04:14:09.090941    8336 start.go:368] acquired machines lock for "multinode-657900-m03" in 0s
	I0219 04:14:09.090941    8336 start.go:96] Skipping create...Using existing machine configuration
	I0219 04:14:09.090941    8336 fix.go:55] fixHost starting: m03
	I0219 04:14:09.090941    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:09.767470    8336 main.go:141] libmachine: [stdout =====>] : Off
	
	I0219 04:14:09.767766    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:09.767766    8336 fix.go:103] recreateIfNeeded on multinode-657900-m03: state=Stopped err=<nil>
	W0219 04:14:09.767766    8336 fix.go:129] unexpected machine state, will restart: <nil>
	I0219 04:14:09.771229    8336 out.go:177] * Restarting existing hyperv VM for "multinode-657900-m03" ...
	I0219 04:14:09.775710    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-657900-m03
	I0219 04:14:11.395611    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:14:11.395611    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:11.395701    8336 main.go:141] libmachine: Waiting for host to start...
	I0219 04:14:11.395701    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:12.123967    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:12.123967    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:12.123967    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:13.136521    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:14:13.136521    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:14.150780    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:14.868472    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:14.868472    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:14.868472    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:15.887081    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:14:15.887167    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:16.888361    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:17.623237    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:17.623237    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:17.623316    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:18.650586    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:14:18.650586    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:19.651741    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:20.399964    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:20.399964    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:20.399964    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:21.401459    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:14:21.401528    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:22.415471    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:23.159567    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:23.159567    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:23.159567    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:24.174810    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:14:24.174810    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:25.178224    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:25.923985    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:25.923985    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:25.923985    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:26.959108    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:14:26.959108    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:27.970467    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:28.719975    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:28.719975    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:28.719975    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:29.770984    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:14:29.771145    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:30.779958    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:31.479282    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:31.479328    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:31.479437    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:32.510222    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:14:32.510222    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:33.512553    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:34.225296    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:34.225296    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:34.225296    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:35.208604    8336 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:14:35.208777    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:36.212145    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:36.908810    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:36.908810    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:36.908810    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:37.939096    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.14
	
	I0219 04:14:37.939273    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:37.941945    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:38.654961    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:38.655087    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:38.655087    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:39.667405    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.14
	
	I0219 04:14:39.667405    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:39.667405    8336 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900\config.json ...
	I0219 04:14:39.670068    8336 machine.go:88] provisioning docker machine ...
	I0219 04:14:39.670267    8336 buildroot.go:166] provisioning hostname "multinode-657900-m03"
	I0219 04:14:39.670267    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:40.369976    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:40.370197    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:40.370197    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:41.398462    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.14
	
	I0219 04:14:41.398548    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:41.403016    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:14:41.403676    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.14 22 <nil> <nil>}
	I0219 04:14:41.403676    8336 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-657900-m03 && echo "multinode-657900-m03" | sudo tee /etc/hostname
	I0219 04:14:41.569686    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-657900-m03
	
	I0219 04:14:41.569820    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:42.252313    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:42.252569    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:42.252619    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:43.280012    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.14
	
	I0219 04:14:43.280012    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:43.283321    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:14:43.284569    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.14 22 <nil> <nil>}
	I0219 04:14:43.284817    8336 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-657900-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-657900-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-657900-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:14:43.440526    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0219 04:14:43.440526    8336 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:14:43.440526    8336 buildroot.go:174] setting up certificates
	I0219 04:14:43.440526    8336 provision.go:83] configureAuth start
	I0219 04:14:43.440526    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:44.153882    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:44.153882    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:44.153882    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:45.172293    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.14
	
	I0219 04:14:45.172293    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:45.172564    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:45.865055    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:45.865055    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:45.865353    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:46.893385    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.14
	
	I0219 04:14:46.893385    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:46.893607    8336 provision.go:138] copyHostCerts
	I0219 04:14:46.893716    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0219 04:14:46.893942    8336 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:14:46.893942    8336 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:14:46.893942    8336 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:14:46.895295    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0219 04:14:46.895575    8336 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:14:46.895652    8336 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:14:46.896770    8336 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:14:46.897506    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0219 04:14:46.897805    8336 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:14:46.897896    8336 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:14:46.898106    8336 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:14:46.899423    8336 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-657900-m03 san=[172.28.250.14 172.28.250.14 localhost 127.0.0.1 minikube multinode-657900-m03]
	I0219 04:14:47.114844    8336 provision.go:172] copyRemoteCerts
	I0219 04:14:47.127139    8336 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:14:47.127139    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:47.843689    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:47.843772    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:47.843830    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:48.851823    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.14
	
	I0219 04:14:48.851823    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:48.851823    8336 sshutil.go:53] new ssh client: &{IP:172.28.250.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m03\id_rsa Username:docker}
	I0219 04:14:48.958596    8336 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.831463s)
	I0219 04:14:48.958596    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0219 04:14:48.959348    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:14:48.997606    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0219 04:14:48.998137    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0219 04:14:49.039977    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0219 04:14:49.040027    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0219 04:14:49.081514    8336 provision.go:86] duration metric: configureAuth took 5.6410071s
	I0219 04:14:49.081623    8336 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:14:49.081953    8336 config.go:182] Loaded profile config "multinode-657900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:14:49.081953    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:49.763098    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:49.763210    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:49.763210    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:50.743119    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.14
	
	I0219 04:14:50.743291    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:50.748891    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:14:50.749685    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.14 22 <nil> <nil>}
	I0219 04:14:50.749685    8336 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:14:50.889973    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:14:50.890035    8336 buildroot.go:70] root file system type: tmpfs
	I0219 04:14:50.890175    8336 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:14:50.890226    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:51.593274    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:51.593422    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:51.593422    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:52.637638    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.14
	
	I0219 04:14:52.637638    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:52.642118    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:14:52.642990    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.14 22 <nil> <nil>}
	I0219 04:14:52.643188    8336 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.244.121"
	Environment="NO_PROXY=172.28.244.121,172.28.250.48"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 04:14:52.815379    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.244.121
	Environment=NO_PROXY=172.28.244.121,172.28.250.48
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0219 04:14:52.815459    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:53.538823    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:53.538823    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:53.538823    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:54.583408    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.14
	
	I0219 04:14:54.583408    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:54.588382    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:14:54.588998    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.14 22 <nil> <nil>}
	I0219 04:14:54.588998    8336 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 04:14:55.830883    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0219 04:14:55.830883    8336 machine.go:91] provisioned docker machine in 16.1608679s
	I0219 04:14:55.830883    8336 start.go:300] post-start starting for "multinode-657900-m03" (driver="hyperv")
	I0219 04:14:55.830883    8336 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 04:14:55.840873    8336 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 04:14:55.841870    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:56.523617    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:56.523617    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:56.523617    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:57.535894    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.14
	
	I0219 04:14:57.535959    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:57.536068    8336 sshutil.go:53] new ssh client: &{IP:172.28.250.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m03\id_rsa Username:docker}
	I0219 04:14:57.646278    8336 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.8043164s)
	I0219 04:14:57.656431    8336 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 04:14:57.663361    8336 command_runner.go:130] > NAME=Buildroot
	I0219 04:14:57.663502    8336 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
	I0219 04:14:57.663502    8336 command_runner.go:130] > ID=buildroot
	I0219 04:14:57.663502    8336 command_runner.go:130] > VERSION_ID=2021.02.12
	I0219 04:14:57.663502    8336 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0219 04:14:57.663614    8336 info.go:137] Remote host: Buildroot 2021.02.12
	I0219 04:14:57.663614    8336 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 04:14:57.664089    8336 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 04:14:57.664968    8336 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 04:14:57.664968    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> /etc/ssl/certs/101482.pem
	I0219 04:14:57.674253    8336 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 04:14:57.689707    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 04:14:57.726943    8336 start.go:303] post-start completed in 1.8960668s
	I0219 04:14:57.726943    8336 fix.go:57] fixHost completed within 48.6361629s
	I0219 04:14:57.726943    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:14:58.433384    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:14:58.433384    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:58.433384    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:14:59.481121    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.14
	
	I0219 04:14:59.481491    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:14:59.486974    8336 main.go:141] libmachine: Using SSH client type: native
	I0219 04:14:59.487711    8336 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.14 22 <nil> <nil>}
	I0219 04:14:59.488286    8336 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0219 04:14:59.628755    8336 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676780099.627927100
	
	I0219 04:14:59.628755    8336 fix.go:207] guest clock: 1676780099.627927100
	I0219 04:14:59.628755    8336 fix.go:220] Guest: 2023-02-19 04:14:59.6279271 +0000 GMT Remote: 2023-02-19 04:14:57.7269437 +0000 GMT m=+225.653832101 (delta=1.9009834s)
	I0219 04:14:59.628755    8336 fix.go:191] guest clock delta is within tolerance: 1.9009834s
	I0219 04:14:59.628755    8336 start.go:83] releasing machines lock for "multinode-657900-m03", held for 50.5379812s
	I0219 04:14:59.628755    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:15:00.368099    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:15:00.368099    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:15:00.368423    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:15:01.367594    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.14
	
	I0219 04:15:01.367594    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:15:01.371648    8336 out.go:177] * Found network options:
	I0219 04:15:01.374970    8336 out.go:177]   - NO_PROXY=172.28.244.121,172.28.250.48
	W0219 04:15:01.377709    8336 proxy.go:119] fail to check proxy env: Error ip not in block
	W0219 04:15:01.377773    8336 proxy.go:119] fail to check proxy env: Error ip not in block
	I0219 04:15:01.379903    8336 out.go:177]   - no_proxy=172.28.244.121,172.28.250.48
	W0219 04:15:01.381945    8336 proxy.go:119] fail to check proxy env: Error ip not in block
	W0219 04:15:01.381945    8336 proxy.go:119] fail to check proxy env: Error ip not in block
	W0219 04:15:01.384238    8336 proxy.go:119] fail to check proxy env: Error ip not in block
	W0219 04:15:01.384238    8336 proxy.go:119] fail to check proxy env: Error ip not in block
	I0219 04:15:01.389127    8336 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0219 04:15:01.389691    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:15:01.393888    8336 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0219 04:15:01.393888    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:15:02.130821    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:15:02.130821    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:15:02.130821    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:15:02.131145    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:15:02.131145    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:15:02.131145    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m03 ).networkadapters[0]).ipaddresses[0]
	I0219 04:15:03.224148    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.14
	
	I0219 04:15:03.224432    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:15:03.224755    8336 sshutil.go:53] new ssh client: &{IP:172.28.250.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m03\id_rsa Username:docker}
	I0219 04:15:03.239357    8336 main.go:141] libmachine: [stdout =====>] : 172.28.250.14
	
	I0219 04:15:03.239639    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:15:03.240063    8336 sshutil.go:53] new ssh client: &{IP:172.28.250.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m03\id_rsa Username:docker}
	I0219 04:15:05.888551    8336 command_runner.go:130] ! curl: (28) Resolving timed out after 2000 milliseconds
	I0219 04:15:05.888995    8336 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0219 04:15:05.888995    8336 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.4998833s)
	I0219 04:15:05.888995    8336 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.4951221s)
	W0219 04:15:05.889229    8336 start.go:835] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	W0219 04:15:05.889333    8336 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	W0219 04:15:05.889560    8336 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0219 04:15:05.889664    8336 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0219 04:15:05.900832    8336 ssh_runner.go:195] Run: which cri-dockerd
	I0219 04:15:05.907039    8336 command_runner.go:130] > /usr/bin/cri-dockerd
	I0219 04:15:05.916933    8336 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0219 04:15:05.931544    8336 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0219 04:15:05.968439    8336 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0219 04:15:05.992171    8336 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0219 04:15:05.992261    8336 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0219 04:15:05.992261    8336 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:15:05.999601    8336 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:15:06.035049    8336 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0219 04:15:06.035114    8336 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0219 04:15:06.035114    8336 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0219 04:15:06.035166    8336 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0219 04:15:06.035166    8336 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0219 04:15:06.035192    8336 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0219 04:15:06.035192    8336 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0219 04:15:06.035192    8336 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0219 04:15:06.035192    8336 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0219 04:15:06.035192    8336 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0219 04:15:06.035192    8336 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:15:06.035192    8336 docker.go:560] Images already preloaded, skipping extraction
	I0219 04:15:06.035192    8336 start.go:485] detecting cgroup driver to use...
	I0219 04:15:06.035192    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:15:06.063930    8336 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0219 04:15:06.064246    8336 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0219 04:15:06.074590    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0219 04:15:06.100598    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0219 04:15:06.115764    8336 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0219 04:15:06.126866    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0219 04:15:06.150262    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:15:06.174804    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0219 04:15:06.201342    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:15:06.225701    8336 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0219 04:15:06.251832    8336 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0219 04:15:06.275853    8336 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0219 04:15:06.291606    8336 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0219 04:15:06.300430    8336 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0219 04:15:06.324204    8336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:15:06.486362    8336 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0219 04:15:06.513148    8336 start.go:485] detecting cgroup driver to use...
	I0219 04:15:06.523722    8336 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0219 04:15:06.546162    8336 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0219 04:15:06.546162    8336 command_runner.go:130] > [Unit]
	I0219 04:15:06.546162    8336 command_runner.go:130] > Description=Docker Application Container Engine
	I0219 04:15:06.546162    8336 command_runner.go:130] > Documentation=https://docs.docker.com
	I0219 04:15:06.546162    8336 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0219 04:15:06.546162    8336 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0219 04:15:06.546162    8336 command_runner.go:130] > StartLimitBurst=3
	I0219 04:15:06.546162    8336 command_runner.go:130] > StartLimitIntervalSec=60
	I0219 04:15:06.546162    8336 command_runner.go:130] > [Service]
	I0219 04:15:06.546162    8336 command_runner.go:130] > Type=notify
	I0219 04:15:06.546162    8336 command_runner.go:130] > Restart=on-failure
	I0219 04:15:06.546162    8336 command_runner.go:130] > Environment=NO_PROXY=172.28.244.121
	I0219 04:15:06.546162    8336 command_runner.go:130] > Environment=NO_PROXY=172.28.244.121,172.28.250.48
	I0219 04:15:06.546162    8336 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0219 04:15:06.547161    8336 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0219 04:15:06.547161    8336 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0219 04:15:06.547161    8336 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0219 04:15:06.547161    8336 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0219 04:15:06.547161    8336 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0219 04:15:06.547161    8336 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0219 04:15:06.547161    8336 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0219 04:15:06.547161    8336 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0219 04:15:06.547161    8336 command_runner.go:130] > ExecStart=
	I0219 04:15:06.547161    8336 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0219 04:15:06.547161    8336 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0219 04:15:06.547161    8336 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0219 04:15:06.547161    8336 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0219 04:15:06.547161    8336 command_runner.go:130] > LimitNOFILE=infinity
	I0219 04:15:06.547161    8336 command_runner.go:130] > LimitNPROC=infinity
	I0219 04:15:06.547161    8336 command_runner.go:130] > LimitCORE=infinity
	I0219 04:15:06.547161    8336 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0219 04:15:06.547161    8336 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0219 04:15:06.547161    8336 command_runner.go:130] > TasksMax=infinity
	I0219 04:15:06.547161    8336 command_runner.go:130] > TimeoutStartSec=0
	I0219 04:15:06.547161    8336 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0219 04:15:06.547161    8336 command_runner.go:130] > Delegate=yes
	I0219 04:15:06.547161    8336 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0219 04:15:06.547161    8336 command_runner.go:130] > KillMode=process
	I0219 04:15:06.547161    8336 command_runner.go:130] > [Install]
	I0219 04:15:06.547161    8336 command_runner.go:130] > WantedBy=multi-user.target
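	The empty `ExecStart=` line in the unit above is what lets the second `ExecStart=` replace the distro default: for non-oneshot services, systemd rejects a unit with multiple ExecStart values unless the list is cleared first (this is exactly the error quoted in the unit's comments). A minimal drop-in illustrating the same pattern — the path and daemon flags here are illustrative, not taken from the log:

```ini
# /etc/systemd/system/docker.service.d/override.conf  (hypothetical drop-in path)
[Service]
# Clear the inherited ExecStart list, then define the replacement.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```

	After editing a drop-in, `systemctl daemon-reload` is required before the change takes effect, which is why the log runs it before restarting docker.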
	I0219 04:15:06.557904    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:15:06.585181    8336 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0219 04:15:06.620128    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:15:06.648755    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:15:06.678607    8336 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0219 04:15:06.739740    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:15:06.760150    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:15:06.788844    8336 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0219 04:15:06.789868    8336 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0219 04:15:06.799866    8336 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0219 04:15:06.961335    8336 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0219 04:15:07.139063    8336 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0219 04:15:07.139115    8336 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0219 04:15:07.185046    8336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:15:07.341467    8336 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:15:08.941137    8336 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5996125s)
	I0219 04:15:08.950857    8336 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:15:09.115064    8336 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0219 04:15:09.283952    8336 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:15:09.449568    8336 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:15:09.608019    8336 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0219 04:15:09.630953    8336 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0219 04:15:09.641059    8336 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0219 04:15:09.648948    8336 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0219 04:15:09.649009    8336 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0219 04:15:09.649070    8336 command_runner.go:130] > Device: 16h/22d	Inode: 865         Links: 1
	I0219 04:15:09.649070    8336 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0219 04:15:09.649070    8336 command_runner.go:130] > Access: 2023-02-19 04:15:09.622399539 +0000
	I0219 04:15:09.649127    8336 command_runner.go:130] > Modify: 2023-02-19 04:15:09.622399539 +0000
	I0219 04:15:09.649156    8336 command_runner.go:130] > Change: 2023-02-19 04:15:09.626396782 +0000
	I0219 04:15:09.649156    8336 command_runner.go:130] >  Birth: -
	I0219 04:15:09.649156    8336 start.go:553] Will wait 60s for crictl version
	I0219 04:15:09.657581    8336 ssh_runner.go:195] Run: which crictl
	I0219 04:15:09.664674    8336 command_runner.go:130] > /usr/bin/crictl
	I0219 04:15:09.673531    8336 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0219 04:15:09.815857    8336 command_runner.go:130] > Version:  0.1.0
	I0219 04:15:09.815857    8336 command_runner.go:130] > RuntimeName:  docker
	I0219 04:15:09.815857    8336 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0219 04:15:09.815857    8336 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0219 04:15:09.815857    8336 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0219 04:15:09.823857    8336 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:15:09.861949    8336 command_runner.go:130] > 20.10.23
	I0219 04:15:09.870494    8336 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:15:09.917813    8336 command_runner.go:130] > 20.10.23
	I0219 04:15:09.921625    8336 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0219 04:15:09.924197    8336 out.go:177]   - env NO_PROXY=172.28.244.121
	I0219 04:15:09.927753    8336 out.go:177]   - env NO_PROXY=172.28.244.121,172.28.250.48
	I0219 04:15:09.929747    8336 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0219 04:15:09.934018    8336 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0219 04:15:09.934018    8336 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0219 04:15:09.934018    8336 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0219 04:15:09.934018    8336 ip.go:207] Found interface: {Index:11 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7f:a7:14 Flags:up|broadcast|multicast|running}
	I0219 04:15:09.937128    8336 ip.go:210] interface addr: fe80::8ff9:73c7:b894:c84f/64
	I0219 04:15:09.937128    8336 ip.go:210] interface addr: 172.28.240.1/20
	I0219 04:15:09.949434    8336 ssh_runner.go:195] Run: grep 172.28.240.1	host.minikube.internal$ /etc/hosts
	I0219 04:15:09.949434    8336 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
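	The two commands above implement an idempotent hosts-entry update: strip any line already ending in `host.minikube.internal`, then append the fresh IP mapping, so repeated starts never accumulate duplicates. A minimal sketch of the same pattern against a scratch file (the IP and hostname are taken from the log; the temp file stands in for `/etc/hosts`):

```shell
# Idempotent hosts-entry update, as minikube does for host.minikube.internal.
# Operates on a scratch copy so it is safe to run anywhere.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n' > "$HOSTS"

update_entry() {
  ip=$1; name=$2; tab=$(printf '\t')
  # Drop any line already ending in the hostname, then append the new mapping.
  { grep -v "${tab}${name}\$" "$HOSTS"; printf '%s\t%s\n' "$ip" "$name"; } > "$HOSTS.new"
  mv "$HOSTS.new" "$HOSTS"
}

update_entry 172.28.240.1 host.minikube.internal
update_entry 172.28.240.1 host.minikube.internal   # running twice still leaves one entry
```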
	I0219 04:15:09.977924    8336 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-657900 for IP: 172.28.250.14
	I0219 04:15:09.977992    8336 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:15:09.978661    8336 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0219 04:15:09.979081    8336 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0219 04:15:09.979298    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0219 04:15:09.979555    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0219 04:15:09.979687    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0219 04:15:09.979819    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0219 04:15:09.980371    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem (1338 bytes)
	W0219 04:15:09.980641    8336 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148_empty.pem, impossibly tiny 0 bytes
	I0219 04:15:09.980641    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0219 04:15:09.980641    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0219 04:15:09.981179    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0219 04:15:09.981409    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0219 04:15:09.981619    8336 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem (1708 bytes)
	I0219 04:15:09.981619    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> /usr/share/ca-certificates/101482.pem
	I0219 04:15:09.982276    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:15:09.982376    8336 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem -> /usr/share/ca-certificates/10148.pem
	I0219 04:15:09.982991    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0219 04:15:10.029591    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0219 04:15:10.074902    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0219 04:15:10.121749    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0219 04:15:10.158300    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /usr/share/ca-certificates/101482.pem (1708 bytes)
	I0219 04:15:10.199642    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0219 04:15:10.239404    8336 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem --> /usr/share/ca-certificates/10148.pem (1338 bytes)
	I0219 04:15:10.290059    8336 ssh_runner.go:195] Run: openssl version
	I0219 04:15:10.299417    8336 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0219 04:15:10.309151    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101482.pem && ln -fs /usr/share/ca-certificates/101482.pem /etc/ssl/certs/101482.pem"
	I0219 04:15:10.335012    8336 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101482.pem
	I0219 04:15:10.341143    8336 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:15:10.341376    8336 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:15:10.350350    8336 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101482.pem
	I0219 04:15:10.358164    8336 command_runner.go:130] > 3ec20f2e
	I0219 04:15:10.368890    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101482.pem /etc/ssl/certs/3ec20f2e.0"
	I0219 04:15:10.396760    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0219 04:15:10.423098    8336 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:15:10.429394    8336 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:15:10.429477    8336 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:15:10.438371    8336 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:15:10.445521    8336 command_runner.go:130] > b5213941
	I0219 04:15:10.454595    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0219 04:15:10.483124    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10148.pem && ln -fs /usr/share/ca-certificates/10148.pem /etc/ssl/certs/10148.pem"
	I0219 04:15:10.508280    8336 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10148.pem
	I0219 04:15:10.515103    8336 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:15:10.515103    8336 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:15:10.524348    8336 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10148.pem
	I0219 04:15:10.532700    8336 command_runner.go:130] > 51391683
	I0219 04:15:10.542797    8336 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10148.pem /etc/ssl/certs/51391683.0"
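	The three `test -L … || ln -fs …` runs above install each CA under its OpenSSL subject-hash name (`<hash>.0`), which is how OpenSSL locates trust anchors in `/etc/ssl/certs` without scanning every file. A sketch of the convention using the `3ec20f2e` hash reported in the log as an assumed value — in a real run the hash comes from `openssl x509 -hash -noout -in <cert>`, which this sketch does not invoke:

```shell
# Hashed-symlink layout OpenSSL expects in a certificate directory.
CERTDIR=$(mktemp -d)
CERT="$CERTDIR/101482.pem"
printf -- '-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n' > "$CERT"  # placeholder body

HASH=3ec20f2e   # assumed here; normally HASH=$(openssl x509 -hash -noout -in "$CERT")
# Install the symlink only if it is not already present, mirroring minikube's test -L guard.
[ -L "$CERTDIR/$HASH.0" ] || ln -fs "$CERT" "$CERTDIR/$HASH.0"
```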
	I0219 04:15:10.566443    8336 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0219 04:15:10.623102    8336 command_runner.go:130] > cgroupfs
	I0219 04:15:10.623188    8336 cni.go:84] Creating CNI manager for ""
	I0219 04:15:10.623188    8336 cni.go:136] 3 nodes found, recommending kindnet
	I0219 04:15:10.623188    8336 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0219 04:15:10.623283    8336 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.250.14 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-657900 NodeName:multinode-657900-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.244.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.250.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0219 04:15:10.623605    8336 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.250.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-657900-m03"
	  kubeletExtraArgs:
	    node-ip: 172.28.250.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.244.121"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0219 04:15:10.623605    8336 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-657900-m03 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.250.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-657900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0219 04:15:10.632856    8336 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0219 04:15:10.650850    8336 command_runner.go:130] > kubeadm
	I0219 04:15:10.650850    8336 command_runner.go:130] > kubectl
	I0219 04:15:10.650850    8336 command_runner.go:130] > kubelet
	I0219 04:15:10.650850    8336 binaries.go:44] Found k8s binaries, skipping transfer
	I0219 04:15:10.658939    8336 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0219 04:15:10.676952    8336 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (453 bytes)
	I0219 04:15:10.706239    8336 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0219 04:15:10.743220    8336 ssh_runner.go:195] Run: grep 172.28.244.121	control-plane.minikube.internal$ /etc/hosts
	I0219 04:15:10.749326    8336 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.244.121	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:15:10.767275    8336 host.go:66] Checking if "multinode-657900" exists ...
	I0219 04:15:10.768268    8336 config.go:182] Loaded profile config "multinode-657900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:15:10.768268    8336 start.go:301] JoinCluster: &{Name:multinode-657900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-657900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.244.121 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.250.48 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.250.14 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:15:10.768268    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0219 04:15:10.768268    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:15:11.486614    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:15:11.486881    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:15:11.486881    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:15:12.530953    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:15:12.530953    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:15:12.531227    8336 sshutil.go:53] new ssh client: &{IP:172.28.244.121 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\id_rsa Username:docker}
	I0219 04:15:12.746612    8336 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token yocni8.i9q8ehl2enlygy5l --discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 
	I0219 04:15:12.747011    8336 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0": (1.9787499s)
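	`kubeadm token create --print-join-command` emits a ready-to-run join line built from the control-plane endpoint, a fresh bootstrap token, and the SHA-256 hash of the cluster CA public key. Before executing it on the new node, minikube appends its own flags. A sketch of that assembly with the values copied from the log output above — this only builds the command string, it does not contact a cluster:

```shell
# Values as printed by `kubeadm token create --print-join-command` in the log.
ENDPOINT=control-plane.minikube.internal:8443
TOKEN=yocni8.i9q8ehl2enlygy5l
CA_HASH=sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1

JOIN_CMD="kubeadm join $ENDPOINT --token $TOKEN --discovery-token-ca-cert-hash $CA_HASH"
# minikube's additions: tolerate preflight warnings, select the CRI socket, pin the node name.
JOIN_CMD="$JOIN_CMD --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-657900-m03"
```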
	I0219 04:15:12.747011    8336 start.go:314] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.28.250.14 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0219 04:15:12.747289    8336 host.go:66] Checking if "multinode-657900" exists ...
	I0219 04:15:12.758205    8336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl drain multinode-657900-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0219 04:15:12.758205    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:15:13.477963    8336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:15:13.478117    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:15:13.478175    8336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:15:14.508693    8336 main.go:141] libmachine: [stdout =====>] : 172.28.244.121
	
	I0219 04:15:14.508693    8336 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:15:14.508693    8336 sshutil.go:53] new ssh client: &{IP:172.28.244.121 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\id_rsa Username:docker}
	I0219 04:15:14.701950    8336 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0219 04:15:14.784277    8336 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-zvk4x, kube-system/kube-proxy-n5vsl
	I0219 04:15:14.788598    8336 command_runner.go:130] > node/multinode-657900-m03 cordoned
	I0219 04:15:14.788598    8336 command_runner.go:130] > node/multinode-657900-m03 drained
	I0219 04:15:14.788598    8336 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl drain multinode-657900-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (2.0303991s)
	I0219 04:15:14.789564    8336 node.go:109] successfully drained node "m03"
	I0219 04:15:14.789564    8336 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:15:14.791035    8336 kapi.go:59] client config for multinode-657900: &rest.Config{Host:"https://172.28.244.121:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:15:14.791958    8336 request.go:1171] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0219 04:15:14.792103    8336 round_trippers.go:463] DELETE https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:14.792103    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:14.792103    8336 round_trippers.go:473]     Content-Type: application/json
	I0219 04:15:14.792167    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:14.792167    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:14.808466    8336 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0219 04:15:14.808466    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:14.808466    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:14.808466    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:14.808466    8336 round_trippers.go:580]     Content-Length: 171
	I0219 04:15:14.808466    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:14 GMT
	I0219 04:15:14.808466    8336 round_trippers.go:580]     Audit-Id: ae22f384-b5fa-4334-9c8d-2473ff8e5341
	I0219 04:15:14.808466    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:14.808466    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:14.808466    8336 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-657900-m03","kind":"nodes","uid":"5442a324-219c-450a-bc84-42446fe87d39"}}
	I0219 04:15:14.808466    8336 node.go:125] successfully deleted node "m03"
	I0219 04:15:14.808466    8336 start.go:318] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.28.250.14 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0219 04:15:14.808466    8336 start.go:322] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.28.250.14 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0219 04:15:14.808466    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yocni8.i9q8ehl2enlygy5l --discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-657900-m03"
	I0219 04:15:15.173748    8336 command_runner.go:130] ! W0219 04:15:15.166852    1315 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:15:15.947479    8336 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0219 04:15:17.768264    8336 command_runner.go:130] > [preflight] Running pre-flight checks
	I0219 04:15:17.768264    8336 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0219 04:15:17.768352    8336 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0219 04:15:17.768352    8336 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0219 04:15:17.768352    8336 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0219 04:15:17.768352    8336 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0219 04:15:17.768352    8336 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0219 04:15:17.768352    8336 command_runner.go:130] > This node has joined the cluster:
	I0219 04:15:17.768352    8336 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0219 04:15:17.768352    8336 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0219 04:15:17.768352    8336 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0219 04:15:17.768451    8336 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yocni8.i9q8ehl2enlygy5l --discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-657900-m03": (2.959995s)
	I0219 04:15:17.768503    8336 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0219 04:15:17.988510    8336 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0219 04:15:18.147007    8336 start.go:303] JoinCluster complete in 7.3787628s
	I0219 04:15:18.147069    8336 cni.go:84] Creating CNI manager for ""
	I0219 04:15:18.147069    8336 cni.go:136] 3 nodes found, recommending kindnet
	I0219 04:15:18.156767    8336 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0219 04:15:18.163868    8336 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0219 04:15:18.163868    8336 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0219 04:15:18.163868    8336 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0219 04:15:18.163868    8336 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0219 04:15:18.163868    8336 command_runner.go:130] > Access: 2023-02-19 04:11:42.350359200 +0000
	I0219 04:15:18.163868    8336 command_runner.go:130] > Modify: 2023-02-16 22:59:55.000000000 +0000
	I0219 04:15:18.163868    8336 command_runner.go:130] > Change: 2023-02-19 04:11:32.681000000 +0000
	I0219 04:15:18.163868    8336 command_runner.go:130] >  Birth: -
	I0219 04:15:18.163868    8336 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0219 04:15:18.163868    8336 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0219 04:15:18.209012    8336 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0219 04:15:18.518359    8336 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0219 04:15:18.518414    8336 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0219 04:15:18.518469    8336 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0219 04:15:18.518469    8336 command_runner.go:130] > daemonset.apps/kindnet configured
	I0219 04:15:18.519517    8336 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:15:18.520515    8336 kapi.go:59] client config for multinode-657900: &rest.Config{Host:"https://172.28.244.121:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:15:18.521411    8336 round_trippers.go:463] GET https://172.28.244.121:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0219 04:15:18.521465    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:18.521505    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:18.521505    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:18.526817    8336 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0219 04:15:18.526817    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:18.526817    8336 round_trippers.go:580]     Audit-Id: be038e4d-dc8c-460a-8a01-88625f070e7a
	I0219 04:15:18.526817    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:18.526817    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:18.526817    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:18.526817    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:18.526817    8336 round_trippers.go:580]     Content-Length: 292
	I0219 04:15:18.526817    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:18 GMT
	I0219 04:15:18.526817    8336 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"15caddfb-a629-49c9-8b4b-8cd8e13b08e2","resourceVersion":"1261","creationTimestamp":"2023-02-19T04:00:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0219 04:15:18.526817    8336 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-657900" context rescaled to 1 replicas
	I0219 04:15:18.526817    8336 start.go:223] Will wait 6m0s for node &{Name:m03 IP:172.28.250.14 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0219 04:15:18.532705    8336 out.go:177] * Verifying Kubernetes components...
	I0219 04:15:18.543305    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:15:18.565587    8336 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:15:18.566284    8336 kapi.go:59] client config for multinode-657900: &rest.Config{Host:"https://172.28.244.121:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-657900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:15:18.567025    8336 node_ready.go:35] waiting up to 6m0s for node "multinode-657900-m03" to be "Ready" ...
	I0219 04:15:18.567112    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:18.567209    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:18.567209    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:18.567260    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:18.570074    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:15:18.570390    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:18.570390    8336 round_trippers.go:580]     Audit-Id: 10cbec33-fa2a-4e28-9015-7ec94b9a0c63
	I0219 04:15:18.570390    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:18.570390    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:18.570390    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:18.570390    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:18.570390    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:18 GMT
	I0219 04:15:18.570567    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1503","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4306 chars]
	I0219 04:15:19.078403    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:19.078403    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:19.078403    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:19.078403    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:19.083168    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:15:19.083558    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:19.083558    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:19.083643    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:19 GMT
	I0219 04:15:19.083643    8336 round_trippers.go:580]     Audit-Id: 2e65511c-45f8-4189-9ec6-7923d276a81c
	I0219 04:15:19.083643    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:19.083713    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:19.083713    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:19.084002    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1503","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4306 chars]
	I0219 04:15:19.584795    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:19.584969    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:19.584969    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:19.584969    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:19.590744    8336 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0219 04:15:19.591391    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:19.591391    8336 round_trippers.go:580]     Audit-Id: e8b39037-2ca0-4d6f-b117-4dfc4f8de8a0
	I0219 04:15:19.591391    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:19.591391    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:19.591391    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:19.591391    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:19.591485    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:19 GMT
	I0219 04:15:19.591571    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1503","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4306 chars]
	I0219 04:15:20.071843    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:20.072112    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:20.072112    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:20.072112    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:20.076657    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:15:20.076856    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:20.076900    8336 round_trippers.go:580]     Audit-Id: e81488db-8500-4e49-887f-002680c1d84e
	I0219 04:15:20.076900    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:20.076932    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:20.076932    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:20.076932    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:20.076932    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:20 GMT
	I0219 04:15:20.076932    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1503","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4306 chars]
	I0219 04:15:20.578433    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:20.578433    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:20.578433    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:20.578536    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:20.581861    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:15:20.581861    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:20.581947    8336 round_trippers.go:580]     Audit-Id: 2d396ef7-cd6a-4ee7-88bb-5c642f690fde
	I0219 04:15:20.581947    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:20.581947    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:20.581947    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:20.581947    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:20.581947    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:20 GMT
	I0219 04:15:20.582348    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1503","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4306 chars]
	I0219 04:15:20.582763    8336 node_ready.go:58] node "multinode-657900-m03" has status "Ready":"False"
	I0219 04:15:21.081808    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:21.081808    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:21.081925    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:21.081925    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:21.086199    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:15:21.086199    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:21.086788    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:21.086788    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:21.086788    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:21.086788    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:21 GMT
	I0219 04:15:21.086788    8336 round_trippers.go:580]     Audit-Id: d66e1336-db1a-4b6c-9f59-4ed4d88e8d08
	I0219 04:15:21.086788    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:21.087132    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1520","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4415 chars]
	I0219 04:15:21.585916    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:21.585916    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:21.585916    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:21.585916    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:21.589503    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:21.589503    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:21.589503    8336 round_trippers.go:580]     Audit-Id: e9000707-441b-40e4-9ac5-f9ff4fc8b339
	I0219 04:15:21.589503    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:21.589503    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:21.589503    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:21.590147    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:21.590147    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:21 GMT
	I0219 04:15:21.590241    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1520","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4415 chars]
	I0219 04:15:22.085655    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:22.085794    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:22.085794    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:22.085794    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:22.089106    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:22.089586    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:22.089586    8336 round_trippers.go:580]     Audit-Id: 9f2a5d89-b220-401e-9416-9067a990d78b
	I0219 04:15:22.089586    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:22.089586    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:22.089586    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:22.089586    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:22.089586    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:22 GMT
	I0219 04:15:22.089828    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1520","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4415 chars]
	I0219 04:15:22.585510    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:22.585580    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:22.585580    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:22.585580    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:22.589979    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:15:22.589979    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:22.589979    8336 round_trippers.go:580]     Audit-Id: eef7161f-27e1-4a49-9ba6-76e79be9d779
	I0219 04:15:22.589979    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:22.589979    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:22.589979    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:22.589979    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:22.589979    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:22 GMT
	I0219 04:15:22.589979    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1520","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4415 chars]
	I0219 04:15:22.590885    8336 node_ready.go:58] node "multinode-657900-m03" has status "Ready":"False"
	I0219 04:15:23.084712    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:23.084876    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:23.084911    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:23.084938    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:23.088519    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:23.088519    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:23.088519    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:23.088519    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:23.088519    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:23 GMT
	I0219 04:15:23.089194    8336 round_trippers.go:580]     Audit-Id: 66dcc081-8554-48d7-9e36-8fade7555888
	I0219 04:15:23.089194    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:23.089194    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:23.089633    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1520","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
	I0219 04:15:23.584556    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:23.584707    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:23.584707    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:23.584707    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:23.589025    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:15:23.589058    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:23.589058    8336 round_trippers.go:580]     Audit-Id: 35cc67a8-4e7d-4d5a-bb95-dd20e867d58e
	I0219 04:15:23.589058    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:23.589058    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:23.589058    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:23.589058    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:23.589058    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:23 GMT
	I0219 04:15:23.589058    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1520","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4415 chars]
	I0219 04:15:24.086142    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:24.086142    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:24.086274    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:24.086274    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:24.090601    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:24.090601    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:24.090601    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:24.090678    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:24.090678    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:24.090678    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:24.090678    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:24 GMT
	I0219 04:15:24.090678    8336 round_trippers.go:580]     Audit-Id: 2bf3d32d-5f35-46ad-9785-fe7586432e7a
	I0219 04:15:24.090954    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1520","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4415 chars]
	I0219 04:15:24.573317    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:24.573317    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:24.573317    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:24.573317    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:24.578962    8336 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0219 04:15:24.578962    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:24.579011    8336 round_trippers.go:580]     Audit-Id: 7d034018-7842-459a-a772-51ed87733954
	I0219 04:15:24.579011    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:24.579011    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:24.579055    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:24.579055    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:24.579055    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:24 GMT
	I0219 04:15:24.579161    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1520","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4415 chars]
	I0219 04:15:25.077789    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:25.078022    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:25.078022    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:25.078022    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:25.085669    8336 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:15:25.085669    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:25.085669    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:25.085669    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:25.085669    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:25.085669    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:25.085669    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:25 GMT
	I0219 04:15:25.085669    8336 round_trippers.go:580]     Audit-Id: cc3ef881-8139-440d-b93e-a724614d0d3c
	I0219 04:15:25.085669    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1520","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4415 chars]
	I0219 04:15:25.087250    8336 node_ready.go:58] node "multinode-657900-m03" has status "Ready":"False"
	I0219 04:15:25.576564    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:25.576624    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:25.576624    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:25.576624    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:25.581558    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:15:25.582056    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:25.582056    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:25.582056    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:25.582056    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:25.582056    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:25 GMT
	I0219 04:15:25.582213    8336 round_trippers.go:580]     Audit-Id: d7df954b-2d82-49f9-8db9-133e79ec3606
	I0219 04:15:25.582213    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:25.582382    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1520","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4415 chars]
	I0219 04:15:26.079682    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:26.079750    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:26.079750    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:26.079750    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:26.083599    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:26.083599    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:26.083599    8336 round_trippers.go:580]     Audit-Id: 2db277f0-8901-4d49-86ea-50ba0b6a2037
	I0219 04:15:26.083599    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:26.083599    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:26.083599    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:26.084068    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:26.084109    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:26 GMT
	I0219 04:15:26.084179    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1520","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4415 chars]
	I0219 04:15:26.580308    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:26.580308    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:26.580308    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:26.580308    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:26.584938    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:15:26.585234    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:26.585234    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:26.585234    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:26.585234    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:26.585234    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:26 GMT
	I0219 04:15:26.585234    8336 round_trippers.go:580]     Audit-Id: f026e028-3cee-4d0e-8e92-564d20888636
	I0219 04:15:26.585234    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:26.585486    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1520","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4415 chars]
	I0219 04:15:27.082397    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:27.082397    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:27.082397    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:27.082397    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:27.089456    8336 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:15:27.089697    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:27.089697    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:27.089740    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:27.089740    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:27.089740    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:27.089740    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:27 GMT
	I0219 04:15:27.089740    8336 round_trippers.go:580]     Audit-Id: 76efe5e3-5b92-4555-ac67-1f7de057ce40
	I0219 04:15:27.090092    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:27.090140    8336 node_ready.go:58] node "multinode-657900-m03" has status "Ready":"False"
	I0219 04:15:27.583109    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:27.583172    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:27.583172    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:27.583172    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:27.585932    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:15:27.586971    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:27.586971    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:27.586971    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:27.586971    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:27.586971    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:27 GMT
	I0219 04:15:27.586971    8336 round_trippers.go:580]     Audit-Id: e2594c06-561f-49a1-b5e2-cabfa0c06a15
	I0219 04:15:27.587100    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:27.587258    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:28.075505    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:28.075505    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:28.075505    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:28.075505    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:28.082748    8336 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:15:28.082748    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:28.082748    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:28 GMT
	I0219 04:15:28.082748    8336 round_trippers.go:580]     Audit-Id: 87f989c2-1402-4eea-bc49-3561fe5b4685
	I0219 04:15:28.082748    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:28.082748    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:28.082748    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:28.082748    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:28.083722    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:28.576732    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:28.576732    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:28.576732    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:28.576732    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:28.580429    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:28.580789    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:28.580789    8336 round_trippers.go:580]     Audit-Id: ca599d57-625c-46a9-b24c-11b80ed4f5c3
	I0219 04:15:28.580789    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:28.580789    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:28.580789    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:28.580789    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:28.580934    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:28 GMT
	I0219 04:15:28.581072    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:29.082731    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:29.082731    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:29.082731    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:29.082731    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:29.087303    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:15:29.087680    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:29.087680    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:29.087680    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:29.087772    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:29.087772    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:29 GMT
	I0219 04:15:29.087772    8336 round_trippers.go:580]     Audit-Id: ec799565-ce03-4cde-addb-37ee80486ea0
	I0219 04:15:29.087772    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:29.087772    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:29.572779    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:29.572779    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:29.572779    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:29.572779    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:29.576630    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:29.576880    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:29.576880    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:29.576880    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:29.576880    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:29.576880    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:29.576880    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:29 GMT
	I0219 04:15:29.576880    8336 round_trippers.go:580]     Audit-Id: 890c4b59-5a30-43ac-b9bb-1e9119c8e98a
	I0219 04:15:29.577105    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:29.577382    8336 node_ready.go:58] node "multinode-657900-m03" has status "Ready":"False"
	I0219 04:15:30.075500    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:30.075582    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:30.075582    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:30.075582    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:30.079773    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:30.079773    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:30.079773    8336 round_trippers.go:580]     Audit-Id: 59446791-7ef7-4c6b-8047-f9346d66169a
	I0219 04:15:30.079773    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:30.079773    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:30.079773    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:30.079773    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:30.079773    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:30 GMT
	I0219 04:15:30.080095    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:30.575323    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:30.575401    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:30.575401    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:30.575401    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:30.579677    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:15:30.579677    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:30.579677    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:30.579677    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:30.579677    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:30.579677    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:30.579677    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:30 GMT
	I0219 04:15:30.579677    8336 round_trippers.go:580]     Audit-Id: 77b18a8a-17e7-4343-86ac-be7919097f2d
	I0219 04:15:30.579677    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:31.074619    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:31.074619    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:31.074619    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:31.074619    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:31.079363    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:15:31.079590    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:31.079590    8336 round_trippers.go:580]     Audit-Id: ed735f75-e5db-43e8-87f2-ef2793a1f8ac
	I0219 04:15:31.079590    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:31.079590    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:31.079680    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:31.079680    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:31.079734    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:31 GMT
	I0219 04:15:31.079997    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:31.577312    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:31.577383    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:31.577383    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:31.577383    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:31.586121    8336 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0219 04:15:31.586193    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:31.586193    8336 round_trippers.go:580]     Audit-Id: 2830a121-dde1-4ee9-a586-ffa8f0038596
	I0219 04:15:31.586193    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:31.586236    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:31.586236    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:31.586287    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:31.586287    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:31 GMT
	I0219 04:15:31.586287    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:31.586287    8336 node_ready.go:58] node "multinode-657900-m03" has status "Ready":"False"
	I0219 04:15:32.081231    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:32.081231    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:32.081231    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:32.081231    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:32.088199    8336 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0219 04:15:32.088199    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:32.088199    8336 round_trippers.go:580]     Audit-Id: 0b4f1f9c-ddca-4906-bb69-a1f80985bb31
	I0219 04:15:32.088199    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:32.088199    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:32.088306    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:32.088422    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:32.088467    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:32 GMT
	I0219 04:15:32.088714    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:32.582403    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:32.582403    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:32.582403    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:32.582403    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:32.585087    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:15:32.585087    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:32.585087    8336 round_trippers.go:580]     Audit-Id: d3160485-dda0-4c5d-9e6f-732ab7501fd0
	I0219 04:15:32.585087    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:32.585087    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:32.585981    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:32.585981    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:32.585981    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:32 GMT
	I0219 04:15:32.586369    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:33.085427    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:33.085427    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:33.085498    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:33.085498    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:33.088926    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:33.088926    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:33.089475    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:33.089475    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:33.089475    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:33.089475    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:33 GMT
	I0219 04:15:33.089475    8336 round_trippers.go:580]     Audit-Id: a673e374-814a-4e54-8442-5069c3fd0d92
	I0219 04:15:33.089475    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:33.089806    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:33.585026    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:33.585138    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:33.585138    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:33.585138    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:33.589942    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:15:33.589942    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:33.590193    8336 round_trippers.go:580]     Audit-Id: 9718a7b3-b0dc-4871-ab1b-9fb22607580f
	I0219 04:15:33.590193    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:33.590193    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:33.590263    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:33.590263    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:33.590263    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:33 GMT
	I0219 04:15:33.590407    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:33.591064    8336 node_ready.go:58] node "multinode-657900-m03" has status "Ready":"False"
	I0219 04:15:34.071021    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:34.071021    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:34.071021    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:34.071100    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:34.078128    8336 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:15:34.078128    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:34.078128    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:34.078128    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:34 GMT
	I0219 04:15:34.078128    8336 round_trippers.go:580]     Audit-Id: 471d8c3f-2b8e-4f34-a4b6-2b4f131eb463
	I0219 04:15:34.078128    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:34.078707    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:34.078707    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:34.078928    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:34.576239    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:34.576239    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:34.576351    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:34.576351    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:34.579789    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:34.579895    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:34.579895    8336 round_trippers.go:580]     Audit-Id: d148f9f7-f400-4351-8045-9b3c815a2687
	I0219 04:15:34.579895    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:34.579952    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:34.579952    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:34.579992    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:34.579992    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:34 GMT
	I0219 04:15:34.580161    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:35.076146    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:35.076255    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:35.076255    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:35.076255    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:35.079749    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:35.080488    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:35.080488    8336 round_trippers.go:580]     Audit-Id: 04d9cac6-0358-46af-ba88-bbe77a71f761
	I0219 04:15:35.080488    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:35.080488    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:35.080488    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:35.080488    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:35.080488    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:35 GMT
	I0219 04:15:35.080488    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:35.579769    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:35.579800    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:35.579875    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:35.579904    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:35.583352    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:35.583352    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:35.583352    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:35.583352    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:35.583352    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:35.583352    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:35.583352    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:35 GMT
	I0219 04:15:35.583352    8336 round_trippers.go:580]     Audit-Id: a591bf57-6512-4f94-abd4-2b851a07d4a3
	I0219 04:15:35.583501    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:36.083396    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:36.083471    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:36.083471    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:36.083528    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:36.087261    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:36.087261    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:36.087261    8336 round_trippers.go:580]     Audit-Id: 3c3e5fd6-2dd5-4e6e-ba2a-4290971dc2ba
	I0219 04:15:36.087261    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:36.087261    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:36.087576    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:36.087576    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:36.087576    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:36 GMT
	I0219 04:15:36.087576    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:36.088106    8336 node_ready.go:58] node "multinode-657900-m03" has status "Ready":"False"
	I0219 04:15:36.583591    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:36.583591    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:36.583670    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:36.583670    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:36.586056    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:15:36.586056    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:36.586056    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:36.586056    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:36.586056    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:36.586830    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:36 GMT
	I0219 04:15:36.586830    8336 round_trippers.go:580]     Audit-Id: 8d5f63aa-b5c0-43f5-8d37-9021093d3551
	I0219 04:15:36.586830    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:36.587101    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:37.086383    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:37.086383    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:37.086383    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:37.086383    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:37.090768    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:15:37.090768    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:37.091639    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:37.091639    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:37.091639    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:37 GMT
	I0219 04:15:37.091639    8336 round_trippers.go:580]     Audit-Id: 8d78f5ec-c5b4-4a04-bfab-d8c0936e4e5e
	I0219 04:15:37.091639    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:37.091639    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:37.092000    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:37.571370    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:37.571370    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:37.571370    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:37.571370    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:37.574978    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:37.574978    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:37.574978    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:37.574978    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:37.575680    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:37.575680    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:37 GMT
	I0219 04:15:37.575680    8336 round_trippers.go:580]     Audit-Id: 1d8c9aa2-17d9-408f-806c-fac25265443a
	I0219 04:15:37.575680    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:37.575877    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:38.085630    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:38.085708    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:38.085708    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:38.085708    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:38.090118    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:15:38.090118    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:38.090118    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:38.090877    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:38.090877    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:38.090877    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:38.090937    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:38 GMT
	I0219 04:15:38.090937    8336 round_trippers.go:580]     Audit-Id: f4ff186e-e93a-43db-9049-f9e915ca211b
	I0219 04:15:38.090937    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:38.091669    8336 node_ready.go:58] node "multinode-657900-m03" has status "Ready":"False"
	I0219 04:15:38.583308    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:38.583308    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:38.583308    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:38.583308    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:38.587501    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:15:38.587501    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:38.587501    8336 round_trippers.go:580]     Audit-Id: 5f7d1dd2-3256-45d2-a74c-f8fa00e23d1c
	I0219 04:15:38.587501    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:38.587501    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:38.587501    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:38.587501    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:38.587501    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:38 GMT
	I0219 04:15:38.588523    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:39.072504    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:39.072504    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:39.072638    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:39.072638    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:39.075953    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:39.076709    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:39.076709    8336 round_trippers.go:580]     Audit-Id: 6e7a2c7c-2b24-4760-85bd-9ae03dec8e6f
	I0219 04:15:39.076709    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:39.076709    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:39.076709    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:39.076709    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:39.076709    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:39 GMT
	I0219 04:15:39.076709    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:39.574903    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:39.574903    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:39.574903    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:39.574903    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:39.583316    8336 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0219 04:15:39.583316    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:39.583316    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:39.583316    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:39.583316    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:39.583729    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:39 GMT
	I0219 04:15:39.583729    8336 round_trippers.go:580]     Audit-Id: d53e05c8-5acc-404c-9a02-c62e4b6a6304
	I0219 04:15:39.583729    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:39.583854    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1526","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4584 chars]
	I0219 04:15:40.082748    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:40.082748    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:40.082840    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:40.082840    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:40.085651    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:15:40.085651    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:40.085651    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:40.085651    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:40.085651    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:40.085651    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:40 GMT
	I0219 04:15:40.085651    8336 round_trippers.go:580]     Audit-Id: ebf1119a-c22d-4995-938d-72bfb5208543
	I0219 04:15:40.085651    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:40.086223    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1548","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4450 chars]
	I0219 04:15:40.086844    8336 node_ready.go:49] node "multinode-657900-m03" has status "Ready":"True"
	I0219 04:15:40.086882    8336 node_ready.go:38] duration metric: took 21.5198409s waiting for node "multinode-657900-m03" to be "Ready" ...
	I0219 04:15:40.086910    8336 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:15:40.086980    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods
	I0219 04:15:40.086980    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:40.086980    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:40.086980    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:40.092418    8336 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0219 04:15:40.092418    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:40.092418    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:40.092418    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:40.092418    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:40.092418    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:40.092418    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:40 GMT
	I0219 04:15:40.092418    8336 round_trippers.go:580]     Audit-Id: 0c67dbd3-d5ef-463d-85ec-ceaf887707fd
	I0219 04:15:40.094809    8336 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1548"},"items":[{"metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1257","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82874 chars]
	I0219 04:15:40.101551    8336 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace to be "Ready" ...
	I0219 04:15:40.101551    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9mvfg
	I0219 04:15:40.101551    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:40.101551    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:40.101551    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:40.103272    8336 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0219 04:15:40.104289    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:40.104289    8336 round_trippers.go:580]     Audit-Id: 03f15064-6a69-4dc6-87d2-3daccb11c73b
	I0219 04:15:40.104335    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:40.104335    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:40.104335    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:40.104335    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:40.104374    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:40 GMT
	I0219 04:15:40.104399    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9mvfg","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"38bce706-085e-44e0-bf5e-97cbdebb682e","resourceVersion":"1257","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"20e1c2f5-8c18-439e-bab0-5e548a848df0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20e1c2f5-8c18-439e-bab0-5e548a848df0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6492 chars]
	I0219 04:15:40.105175    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:15:40.105175    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:40.105258    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:40.105258    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:40.106991    8336 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0219 04:15:40.106991    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:40.106991    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:40 GMT
	I0219 04:15:40.106991    8336 round_trippers.go:580]     Audit-Id: 5c5a0874-054c-44d6-a809-24b1925df787
	I0219 04:15:40.106991    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:40.106991    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:40.107811    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:40.107811    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:40.107853    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:15:40.108439    8336 pod_ready.go:92] pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace has status "Ready":"True"
	I0219 04:15:40.108524    8336 pod_ready.go:81] duration metric: took 6.8881ms waiting for pod "coredns-787d4945fb-9mvfg" in "kube-system" namespace to be "Ready" ...
	I0219 04:15:40.108524    8336 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:15:40.108584    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-657900
	I0219 04:15:40.108630    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:40.108664    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:40.108664    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:40.110352    8336 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0219 04:15:40.110352    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:40.110352    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:40.110352    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:40.110352    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:40.110352    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:40 GMT
	I0219 04:15:40.110352    8336 round_trippers.go:580]     Audit-Id: 48874d39-ad70-4002-a616-69bdcd7d25dd
	I0219 04:15:40.110352    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:40.111442    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-657900","namespace":"kube-system","uid":"e77b4ae1-9bb6-48e7-a39d-b91eaa2fbe32","resourceVersion":"1229","creationTimestamp":"2023-02-19T04:12:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.244.121:2379","kubernetes.io/config.hash":"cf2e032f8176f837f5bcf073190e4313","kubernetes.io/config.mirror":"cf2e032f8176f837f5bcf073190e4313","kubernetes.io/config.seen":"2023-02-19T04:12:18.622144946Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:12:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5857 chars]
	I0219 04:15:40.111442    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:15:40.111442    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:40.111442    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:40.111442    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:40.114283    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:15:40.114403    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:40.114403    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:40.114403    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:40 GMT
	I0219 04:15:40.114403    8336 round_trippers.go:580]     Audit-Id: 281d2cf1-85bd-4b8a-bb95-adb9fb7c625b
	I0219 04:15:40.114403    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:40.114495    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:40.114495    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:40.114578    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:15:40.115160    8336 pod_ready.go:92] pod "etcd-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:15:40.115254    8336 pod_ready.go:81] duration metric: took 6.73ms waiting for pod "etcd-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:15:40.115254    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:15:40.115398    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-657900
	I0219 04:15:40.115398    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:40.115398    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:40.115398    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:40.117325    8336 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0219 04:15:40.117325    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:40.117325    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:40 GMT
	I0219 04:15:40.117325    8336 round_trippers.go:580]     Audit-Id: 3796ac46-9351-4d13-982e-4ee33ceaf71f
	I0219 04:15:40.117325    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:40.117325    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:40.117325    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:40.117325    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:40.118562    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-657900","namespace":"kube-system","uid":"e47db067-f2ff-412b-954f-0b6b6cf42f8b","resourceVersion":"1186","creationTimestamp":"2023-02-19T04:12:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.244.121:8443","kubernetes.io/config.hash":"64d9d1395b6e25aebebbf4adfc03e069","kubernetes.io/config.mirror":"64d9d1395b6e25aebebbf4adfc03e069","kubernetes.io/config.seen":"2023-02-19T04:12:18.621131732Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:12:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7393 chars]
	I0219 04:15:40.119235    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:15:40.119269    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:40.119310    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:40.119310    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:40.122473    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:40.122473    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:40.122473    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:40.122473    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:40 GMT
	I0219 04:15:40.122584    8336 round_trippers.go:580]     Audit-Id: 1878dc95-6d33-4730-9a2d-7550454b31b5
	I0219 04:15:40.122584    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:40.122584    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:40.122630    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:40.122660    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:15:40.123418    8336 pod_ready.go:92] pod "kube-apiserver-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:15:40.123418    8336 pod_ready.go:81] duration metric: took 8.1643ms waiting for pod "kube-apiserver-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:15:40.123418    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:15:40.123418    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-657900
	I0219 04:15:40.123418    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:40.123418    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:40.123418    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:40.127241    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:40.127241    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:40.127241    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:40 GMT
	I0219 04:15:40.127365    8336 round_trippers.go:580]     Audit-Id: 48b6d2d4-85b7-4078-95d0-1afbdf687531
	I0219 04:15:40.127365    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:40.127365    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:40.127365    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:40.127410    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:40.127675    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-657900","namespace":"kube-system","uid":"463b901e-dd04-46fc-91a3-9917b12590ff","resourceVersion":"1192","creationTimestamp":"2023-02-19T04:00:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cd5ea91854c20d0b081e1be96fa370f","kubernetes.io/config.mirror":"7cd5ea91854c20d0b081e1be96fa370f","kubernetes.io/config.seen":"2023-02-19T04:00:19.445306645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7163 chars]
	I0219 04:15:40.128261    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:15:40.128261    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:40.128343    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:40.128343    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:40.130535    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:15:40.130535    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:40.130535    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:40.130535    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:40.130535    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:40.130535    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:40 GMT
	I0219 04:15:40.130535    8336 round_trippers.go:580]     Audit-Id: 4762ef27-28e0-43b7-94fb-4c44f393e480
	I0219 04:15:40.130535    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:40.131733    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:15:40.131853    8336 pod_ready.go:92] pod "kube-controller-manager-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:15:40.131853    8336 pod_ready.go:81] duration metric: took 8.4352ms waiting for pod "kube-controller-manager-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:15:40.131853    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8h9z4" in "kube-system" namespace to be "Ready" ...
	I0219 04:15:40.287513    8336 request.go:622] Waited for 155.3842ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h9z4
	I0219 04:15:40.287793    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h9z4
	I0219 04:15:40.287793    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:40.287793    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:40.287793    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:40.289162    8336 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0219 04:15:40.289162    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:40.289162    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:40.289162    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:40.289162    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:40.289162    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:40.289162    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:40 GMT
	I0219 04:15:40.289162    8336 round_trippers.go:580]     Audit-Id: 9bb380b8-6ae0-463b-a2c2-3454927acd1c
	I0219 04:15:40.295765    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h9z4","generateName":"kube-proxy-","namespace":"kube-system","uid":"5ff10d29-0b2a-4046-a946-90b1a4d8bcb7","resourceVersion":"1392","creationTimestamp":"2023-02-19T04:02:22Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"86ae75b5-707b-4d98-a30e-e970d37cba85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86ae75b5-707b-4d98-a30e-e970d37cba85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0219 04:15:40.489371    8336 request.go:622] Waited for 193.1215ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:15:40.489660    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m02
	I0219 04:15:40.489660    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:40.489660    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:40.489660    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:40.496271    8336 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0219 04:15:40.496271    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:40.496271    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:40 GMT
	I0219 04:15:40.496271    8336 round_trippers.go:580]     Audit-Id: bbafc1eb-6f07-484a-af96-9167b372d71b
	I0219 04:15:40.496271    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:40.496271    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:40.496271    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:40.496271    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:40.497103    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m02","uid":"cfddd6bb-6419-4ad1-90b2-f255cdffca5d","resourceVersion":"1410","creationTimestamp":"2023-02-19T04:13:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4499 chars]
	I0219 04:15:40.497807    8336 pod_ready.go:92] pod "kube-proxy-8h9z4" in "kube-system" namespace has status "Ready":"True"
	I0219 04:15:40.497867    8336 pod_ready.go:81] duration metric: took 365.9556ms waiting for pod "kube-proxy-8h9z4" in "kube-system" namespace to be "Ready" ...
	I0219 04:15:40.497867    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kcm8m" in "kube-system" namespace to be "Ready" ...
	I0219 04:15:40.692058    8336 request.go:622] Waited for 194.1916ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kcm8m
	I0219 04:15:40.692180    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kcm8m
	I0219 04:15:40.692180    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:40.692180    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:40.692180    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:40.702441    8336 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0219 04:15:40.702441    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:40.702441    8336 round_trippers.go:580]     Audit-Id: 2f04cfab-56d2-4e89-8ef6-8c5e0c38dc39
	I0219 04:15:40.702948    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:40.702948    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:40.702948    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:40.702948    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:40.702948    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:40 GMT
	I0219 04:15:40.703033    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kcm8m","generateName":"kube-proxy-","namespace":"kube-system","uid":"8ce14b4f-6df3-4822-ac2b-06f3417e8eaa","resourceVersion":"1198","creationTimestamp":"2023-02-19T04:00:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"86ae75b5-707b-4d98-a30e-e970d37cba85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86ae75b5-707b-4d98-a30e-e970d37cba85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0219 04:15:40.895246    8336 request.go:622] Waited for 191.1255ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:15:40.895451    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:15:40.895451    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:40.895451    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:40.895451    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:40.899164    8336 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0219 04:15:40.899279    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:40.899279    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:40.899279    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:40.899350    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:40.899382    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:40 GMT
	I0219 04:15:40.899382    8336 round_trippers.go:580]     Audit-Id: 3ecb439e-f288-4d85-a07c-baaf7fb5aac0
	I0219 04:15:40.899382    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:40.899530    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:15:40.900077    8336 pod_ready.go:92] pod "kube-proxy-kcm8m" in "kube-system" namespace has status "Ready":"True"
	I0219 04:15:40.900077    8336 pod_ready.go:81] duration metric: took 402.2112ms waiting for pod "kube-proxy-kcm8m" in "kube-system" namespace to be "Ready" ...
	I0219 04:15:40.900077    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n5vsl" in "kube-system" namespace to be "Ready" ...
	I0219 04:15:41.097782    8336 request.go:622] Waited for 197.4126ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n5vsl
	I0219 04:15:41.097878    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n5vsl
	I0219 04:15:41.097878    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:41.097878    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:41.098003    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:41.101893    8336 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0219 04:15:41.101893    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:41.101893    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:41 GMT
	I0219 04:15:41.101893    8336 round_trippers.go:580]     Audit-Id: f0c538a2-9cbb-4d2b-883b-014181a8d897
	I0219 04:15:41.101893    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:41.101893    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:41.101994    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:41.101994    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:41.102272    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-n5vsl","generateName":"kube-proxy-","namespace":"kube-system","uid":"8757301c-e7d4-4784-8e1b-8e1f24aeabcd","resourceVersion":"1515","creationTimestamp":"2023-02-19T04:05:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"86ae75b5-707b-4d98-a30e-e970d37cba85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:05:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86ae75b5-707b-4d98-a30e-e970d37cba85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0219 04:15:41.285585    8336 request.go:622] Waited for 182.4055ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:41.285885    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900-m03
	I0219 04:15:41.285885    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:41.285885    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:41.285885    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:41.289936    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:15:41.289936    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:41.289936    8336 round_trippers.go:580]     Audit-Id: 6c28b12f-c3e2-4973-a6cf-0e51803acc6d
	I0219 04:15:41.289936    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:41.289936    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:41.289936    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:41.290219    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:41.290219    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:41 GMT
	I0219 04:15:41.290290    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900-m03","uid":"1a3c9dfd-5072-4537-bc65-7a5e884544b8","resourceVersion":"1550","creationTimestamp":"2023-02-19T04:15:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:15:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4330 chars]
	I0219 04:15:41.290817    8336 pod_ready.go:92] pod "kube-proxy-n5vsl" in "kube-system" namespace has status "Ready":"True"
	I0219 04:15:41.290817    8336 pod_ready.go:81] duration metric: took 390.7407ms waiting for pod "kube-proxy-n5vsl" in "kube-system" namespace to be "Ready" ...
	I0219 04:15:41.290817    8336 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:15:41.489011    8336 request.go:622] Waited for 197.802ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-657900
	I0219 04:15:41.489090    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-657900
	I0219 04:15:41.489090    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:41.489090    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:41.489090    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:41.493643    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:15:41.493643    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:41.493643    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:41.493643    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:41 GMT
	I0219 04:15:41.493824    8336 round_trippers.go:580]     Audit-Id: f2d6baf1-930e-47cf-b11e-3ce368d228cc
	I0219 04:15:41.493824    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:41.493824    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:41.493824    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:41.493824    8336 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-657900","namespace":"kube-system","uid":"ba38eff9-ab82-463a-bb6f-8af5e4599f15","resourceVersion":"1223","creationTimestamp":"2023-02-19T04:00:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d67ab919dfafdb0eecec781e708349ff","kubernetes.io/config.mirror":"d67ab919dfafdb0eecec781e708349ff","kubernetes.io/config.seen":"2023-02-19T04:00:19.445308045Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-19T04:00:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4893 chars]
	I0219 04:15:41.693688    8336 request.go:622] Waited for 199.147ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:15:41.694047    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes/multinode-657900
	I0219 04:15:41.694047    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:41.694047    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:41.694047    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:41.701267    8336 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0219 04:15:41.701267    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:41.701267    8336 round_trippers.go:580]     Audit-Id: b64505e3-7060-422b-9953-70369260e4f2
	I0219 04:15:41.701267    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:41.701267    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:41.701267    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:41.701267    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:41.701267    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:41 GMT
	I0219 04:15:41.701267    8336 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-02-19T04:00:15Z","fieldsType":"FieldsV1","f [truncated 5394 chars]
	I0219 04:15:41.702364    8336 pod_ready.go:92] pod "kube-scheduler-multinode-657900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:15:41.702364    8336 pod_ready.go:81] duration metric: took 411.549ms waiting for pod "kube-scheduler-multinode-657900" in "kube-system" namespace to be "Ready" ...
	I0219 04:15:41.702364    8336 pod_ready.go:38] duration metric: took 1.6154293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:15:41.702364    8336 system_svc.go:44] waiting for kubelet service to be running ....
	I0219 04:15:41.712672    8336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:15:41.734540    8336 system_svc.go:56] duration metric: took 32.1756ms WaitForService to wait for kubelet.
	I0219 04:15:41.734540    8336 kubeadm.go:578] duration metric: took 23.2077998s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0219 04:15:41.734677    8336 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:15:41.896825    8336 request.go:622] Waited for 161.8884ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.244.121:8443/api/v1/nodes
	I0219 04:15:41.896988    8336 round_trippers.go:463] GET https://172.28.244.121:8443/api/v1/nodes
	I0219 04:15:41.897059    8336 round_trippers.go:469] Request Headers:
	I0219 04:15:41.897059    8336 round_trippers.go:473]     Accept: application/json, */*
	I0219 04:15:41.897059    8336 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0219 04:15:41.901533    8336 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0219 04:15:41.901533    8336 round_trippers.go:577] Response Headers:
	I0219 04:15:41.902042    8336 round_trippers.go:580]     Cache-Control: no-cache, private
	I0219 04:15:41.902042    8336 round_trippers.go:580]     Content-Type: application/json
	I0219 04:15:41.902042    8336 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4be217c8-c9f2-49b4-850c-9836a2857f4c
	I0219 04:15:41.902042    8336 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d139d53c-4b11-4c04-87c0-06238ced0b35
	I0219 04:15:41.902042    8336 round_trippers.go:580]     Date: Sun, 19 Feb 2023 04:15:41 GMT
	I0219 04:15:41.902042    8336 round_trippers.go:580]     Audit-Id: efbcf435-e94a-4afd-9be3-377acccadb67
	I0219 04:15:41.902693    8336 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1551"},"items":[{"metadata":{"name":"multinode-657900","uid":"e798d376-68c6-4b77-bfab-650f9f4b6337","resourceVersion":"1230","creationTimestamp":"2023-02-19T04:00:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-657900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b522747fea7d12101d906a75c46b71d9d9f96e61","minikube.k8s.io/name":"multinode-657900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_19T04_00_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16261 chars]
	I0219 04:15:41.903730    8336 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:15:41.903799    8336 node_conditions.go:123] node cpu capacity is 2
	I0219 04:15:41.903799    8336 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:15:41.903799    8336 node_conditions.go:123] node cpu capacity is 2
	I0219 04:15:41.903799    8336 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:15:41.903868    8336 node_conditions.go:123] node cpu capacity is 2
	I0219 04:15:41.903868    8336 node_conditions.go:105] duration metric: took 169.1921ms to run NodePressure ...
	I0219 04:15:41.903868    8336 start.go:228] waiting for startup goroutines ...
	I0219 04:15:41.903939    8336 start.go:242] writing updated cluster config ...
	I0219 04:15:41.915877    8336 ssh_runner.go:195] Run: rm -f paused
	I0219 04:15:42.101144    8336 start.go:555] kubectl: 1.18.2, cluster: 1.26.1 (minor skew: 8)
	I0219 04:15:42.107030    8336 out.go:177] 
	W0219 04:15:42.111887    8336 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.1.
	I0219 04:15:42.115791    8336 out.go:177]   - Want kubectl v1.26.1? Try 'minikube kubectl -- get pods -A'
	I0219 04:15:42.123790    8336 out.go:177] * Done! kubectl is now configured to use "multinode-657900" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sun 2023-02-19 04:11:35 UTC, ends at Sun 2023-02-19 04:15:50 UTC. --
	Feb 19 04:12:44 multinode-657900 dockerd[1019]: time="2023-02-19T04:12:44.143203723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:12:44 multinode-657900 dockerd[1019]: time="2023-02-19T04:12:44.143369821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:12:44 multinode-657900 dockerd[1019]: time="2023-02-19T04:12:44.143388221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:12:44 multinode-657900 dockerd[1019]: time="2023-02-19T04:12:44.143601817Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/154034b4d97c213b01ac06aef96013ac2d24eaad61f87d781bdb2dec5373720a pid=3896 runtime=io.containerd.runc.v2
	Feb 19 04:12:44 multinode-657900 dockerd[1019]: time="2023-02-19T04:12:44.890382094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:12:44 multinode-657900 dockerd[1019]: time="2023-02-19T04:12:44.890671890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:12:44 multinode-657900 dockerd[1019]: time="2023-02-19T04:12:44.890812787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:12:44 multinode-657900 dockerd[1019]: time="2023-02-19T04:12:44.892063068Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4139a72a12f93d0a91f5d9b567e07147821ec8fa1abe28441b717cfff3c5576e pid=4008 runtime=io.containerd.runc.v2
	Feb 19 04:12:45 multinode-657900 dockerd[1019]: time="2023-02-19T04:12:45.064881695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:12:45 multinode-657900 dockerd[1019]: time="2023-02-19T04:12:45.064979693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:12:45 multinode-657900 dockerd[1019]: time="2023-02-19T04:12:45.065000893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:12:45 multinode-657900 dockerd[1019]: time="2023-02-19T04:12:45.065968479Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1743bacf9379ef4c1306426d6d730efd44a0ef99519a7426465d1fd4f3ff0a95 pid=4053 runtime=io.containerd.runc.v2
	Feb 19 04:12:45 multinode-657900 dockerd[1019]: time="2023-02-19T04:12:45.888990763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:12:45 multinode-657900 dockerd[1019]: time="2023-02-19T04:12:45.889225259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:12:45 multinode-657900 dockerd[1019]: time="2023-02-19T04:12:45.889320258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:12:45 multinode-657900 dockerd[1019]: time="2023-02-19T04:12:45.889703252Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f2950f6bcb84ea1c8a0813801914ca1106c31222379fac6549d453f01ce6f144 pid=4148 runtime=io.containerd.runc.v2
	Feb 19 04:13:01 multinode-657900 dockerd[1013]: time="2023-02-19T04:13:01.156722827Z" level=info msg="ignoring event" container=b7bda78f189c01b42d0feae25386ea3125a1d80c4182f9e52dd1fbe66480ef6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 19 04:13:01 multinode-657900 dockerd[1019]: time="2023-02-19T04:13:01.158223941Z" level=info msg="shim disconnected" id=b7bda78f189c01b42d0feae25386ea3125a1d80c4182f9e52dd1fbe66480ef6e
	Feb 19 04:13:01 multinode-657900 dockerd[1019]: time="2023-02-19T04:13:01.158476844Z" level=warning msg="cleaning up after shim disconnected" id=b7bda78f189c01b42d0feae25386ea3125a1d80c4182f9e52dd1fbe66480ef6e namespace=moby
	Feb 19 04:13:01 multinode-657900 dockerd[1019]: time="2023-02-19T04:13:01.158506244Z" level=info msg="cleaning up dead shim"
	Feb 19 04:13:01 multinode-657900 dockerd[1019]: time="2023-02-19T04:13:01.174115996Z" level=warning msg="cleanup warnings time=\"2023-02-19T04:13:01Z\" level=info msg=\"starting signal loop\" namespace=moby pid=4450 runtime=io.containerd.runc.v2\n"
	Feb 19 04:13:13 multinode-657900 dockerd[1019]: time="2023-02-19T04:13:13.938412915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:13:13 multinode-657900 dockerd[1019]: time="2023-02-19T04:13:13.938598517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:13:13 multinode-657900 dockerd[1019]: time="2023-02-19T04:13:13.938674017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:13:13 multinode-657900 dockerd[1019]: time="2023-02-19T04:13:13.940167428Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/29ea0e8c22e3a24045b0e25fc877745d3b054a5b675c03044e5955b679b7586a pid=4635 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	29ea0e8c22e3a       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   e2cfae71205f4
	f2950f6bcb84e       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   1743bacf9379e
	4139a72a12f93       5185b96f0becf                                                                                         3 minutes ago       Running             coredns                   1                   154034b4d97c2
	ca92004001225       d6e3e26021b60                                                                                         3 minutes ago       Running             kindnet-cni               1                   33ce0bbd4136a
	b7bda78f189c0       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   e2cfae71205f4
	4549704ae403b       46a6bb3c77ce0                                                                                         3 minutes ago       Running             kube-proxy                1                   c6c5685500d34
	5300b5f408831       deb04688c4a35                                                                                         3 minutes ago       Running             kube-apiserver            0                   66382011839a6
	e74be77c0722f       655493523f607                                                                                         3 minutes ago       Running             kube-scheduler            1                   4fb74a35707ac
	7b27493d59e0f       fce326961ae2d                                                                                         3 minutes ago       Running             etcd                      0                   295c02132f9b9
	04ad7ad7aaca2       e9c08e11b07f6                                                                                         3 minutes ago       Running             kube-controller-manager   1                   af9d82741b89d
	9a54a7d3eef7d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Exited              busybox                   0                   d42b0d327c16f
	0eb749d12a495       5185b96f0becf                                                                                         15 minutes ago      Exited              coredns                   0                   addeabc5c2e04
	3cc329202fb1e       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              15 minutes ago      Exited              kindnet-cni               0                   7c26aac822f9e
	ca0d83d4d696e       46a6bb3c77ce0                                                                                         15 minutes ago      Exited              kube-proxy                0                   35c5df6e4d7f1
	2f34e1aaa1b5f       655493523f607                                                                                         15 minutes ago      Exited              kube-scheduler            0                   9cad608b4ab6e
	105abb87f41ff       e9c08e11b07f6                                                                                         15 minutes ago      Exited              kube-controller-manager   0                   a766f49230c1f
	
	* 
	* ==> coredns [0eb749d12a49] <==
	* [INFO] 10.244.0.3:55999 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000196597s
	[INFO] 10.244.0.3:36128 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109599s
	[INFO] 10.244.0.3:33835 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154698s
	[INFO] 10.244.0.3:46693 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000138799s
	[INFO] 10.244.0.3:40538 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211898s
	[INFO] 10.244.0.3:40861 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069599s
	[INFO] 10.244.0.3:43027 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147798s
	[INFO] 10.244.1.2:41835 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215397s
	[INFO] 10.244.1.2:60791 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068899s
	[INFO] 10.244.1.2:42879 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144698s
	[INFO] 10.244.1.2:52603 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197897s
	[INFO] 10.244.0.3:53656 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244797s
	[INFO] 10.244.0.3:52084 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000363795s
	[INFO] 10.244.0.3:35462 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115198s
	[INFO] 10.244.0.3:56378 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202797s
	[INFO] 10.244.1.2:54357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000227497s
	[INFO] 10.244.1.2:36124 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112098s
	[INFO] 10.244.1.2:48224 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084699s
	[INFO] 10.244.1.2:56851 - 5 "PTR IN 1.240.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093899s
	[INFO] 10.244.0.3:44657 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000399095s
	[INFO] 10.244.0.3:49393 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000249097s
	[INFO] 10.244.0.3:54475 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000181598s
	[INFO] 10.244.0.3:51210 - 5 "PTR IN 1.240.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000505495s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [4139a72a12f9] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = dc373b1a880fdd4ccb700cff30600cc4bf8c50378309c853254a8500867351a3e9142cc9578843a443961b28e6690d646b490f89e043822a41fbe79aabc9a951
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:54952 - 58564 "HINFO IN 595027684513331691.7305585036583297433. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.0538785s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-657900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-657900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b522747fea7d12101d906a75c46b71d9d9f96e61
	                    minikube.k8s.io/name=multinode-657900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_19T04_00_21_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Feb 2023 04:00:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-657900
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Feb 2023 04:15:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Feb 2023 04:12:38 +0000   Sun, 19 Feb 2023 04:00:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Feb 2023 04:12:38 +0000   Sun, 19 Feb 2023 04:00:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Feb 2023 04:12:38 +0000   Sun, 19 Feb 2023 04:00:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Feb 2023 04:12:38 +0000   Sun, 19 Feb 2023 04:12:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.244.121
	  Hostname:    multinode-657900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 e97d9145644b48ce9f643c4066851a2f
	  System UUID:                1ab1fdf1-fba4-7b4d-9307-f55ed7af7e26
	  Boot ID:                    4567eccd-7719-4524-ae1c-fecdf05e518f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-xg2wx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-787d4945fb-9mvfg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-multinode-657900                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m24s
	  kube-system                 kindnet-lvjng                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-multinode-657900             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 kube-controller-manager-multinode-657900    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-kcm8m                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-multinode-657900             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x5 over 15m)      kubelet          Node multinode-657900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x5 over 15m)      kubelet          Node multinode-657900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x4 over 15m)      kubelet          Node multinode-657900 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     15m                    kubelet          Node multinode-657900 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  15m                    kubelet          Node multinode-657900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                    kubelet          Node multinode-657900 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           15m                    node-controller  Node multinode-657900 event: Registered Node multinode-657900 in Controller
	  Normal  NodeReady                15m                    kubelet          Node multinode-657900 status is now: NodeReady
	  Normal  Starting                 3m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m33s (x8 over 3m33s)  kubelet          Node multinode-657900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m33s (x8 over 3m33s)  kubelet          Node multinode-657900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m33s (x7 over 3m33s)  kubelet          Node multinode-657900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m11s                  node-controller  Node multinode-657900 event: Registered Node multinode-657900 in Controller
	
	
	Name:               multinode-657900-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-657900-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Feb 2023 04:13:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-657900-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Feb 2023 04:15:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Feb 2023 04:14:06 +0000   Sun, 19 Feb 2023 04:13:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Feb 2023 04:14:06 +0000   Sun, 19 Feb 2023 04:13:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Feb 2023 04:14:06 +0000   Sun, 19 Feb 2023 04:13:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Feb 2023 04:14:06 +0000   Sun, 19 Feb 2023 04:14:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.250.48
	  Hostname:    multinode-657900-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 d5f8cffb1cd44b5bba4024baad38c032
	  System UUID:                9d847d5f-b13d-1b42-8a73-2f59d1ebf938
	  Boot ID:                    29224343-efcd-4010-9408-0144c66d69eb
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-5w5b7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kindnet-fp2c9               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-8h9z4            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 13m                  kube-proxy       
	  Normal  Starting                 111s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)    kubelet          Node multinode-657900-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)    kubelet          Node multinode-657900-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)    kubelet          Node multinode-657900-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 13m                  kubelet          Starting kubelet.
	  Normal  NodeReady                13m                  kubelet          Node multinode-657900-m02 status is now: NodeReady
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s (x2 over 115s)  kubelet          Node multinode-657900-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x2 over 115s)  kubelet          Node multinode-657900-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x2 over 115s)  kubelet          Node multinode-657900-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  115s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           111s                 node-controller  Node multinode-657900-m02 event: Registered Node multinode-657900-m02 in Controller
	  Normal  NodeReady                105s                 kubelet          Node multinode-657900-m02 status is now: NodeReady
	
	
	Name:               multinode-657900-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-657900-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Feb 2023 04:15:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-657900-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Feb 2023 04:15:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Feb 2023 04:15:39 +0000   Sun, 19 Feb 2023 04:15:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Feb 2023 04:15:39 +0000   Sun, 19 Feb 2023 04:15:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Feb 2023 04:15:39 +0000   Sun, 19 Feb 2023 04:15:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Feb 2023 04:15:39 +0000   Sun, 19 Feb 2023 04:15:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.250.14
	  Hostname:    multinode-657900-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 15ad3c48f24f4becb9bdf4d5191e9bb4
	  System UUID:                daeeb10e-1311-f048-981f-2e7f16ed0b86
	  Boot ID:                    e25f8c9c-47f4-44da-a913-f92b8e8933ff
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-zvk4x       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-n5vsl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m2s                 kube-proxy       
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 31s                  kube-proxy       
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)    kubelet          Node multinode-657900-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)    kubelet          Node multinode-657900-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)    kubelet          Node multinode-657900-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                10m                  kubelet          Node multinode-657900-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    6m5s (x2 over 6m5s)  kubelet          Node multinode-657900-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s (x2 over 6m5s)  kubelet          Node multinode-657900-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m5s (x2 over 6m5s)  kubelet          Node multinode-657900-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 6m5s                 kubelet          Starting kubelet.
	  Normal  NodeReady                5m56s                kubelet          Node multinode-657900-m03 status is now: NodeReady
	  Normal  Starting                 35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x2 over 35s)    kubelet          Node multinode-657900-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x2 over 35s)    kubelet          Node multinode-657900-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x2 over 35s)    kubelet          Node multinode-657900-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           31s                  node-controller  Node multinode-657900-m03 event: Registered Node multinode-657900-m03 in Controller
	  Normal  NodeReady                12s                  kubelet          Node multinode-657900-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	*               on the kernel command line
	[  +0.000640] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.623795] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.227614] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.176533] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +8.012972] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +17.227634] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.164473] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[Feb19 04:12] systemd-fstab-generator[945]: Ignoring "noauto" for root device
	[  +0.525534] systemd-fstab-generator[980]: Ignoring "noauto" for root device
	[  +0.179306] systemd-fstab-generator[991]: Ignoring "noauto" for root device
	[  +0.202252] systemd-fstab-generator[1004]: Ignoring "noauto" for root device
	[  +1.465037] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.407036] systemd-fstab-generator[1169]: Ignoring "noauto" for root device
	[  +0.191211] systemd-fstab-generator[1180]: Ignoring "noauto" for root device
	[  +0.180041] systemd-fstab-generator[1191]: Ignoring "noauto" for root device
	[  +0.184625] systemd-fstab-generator[1202]: Ignoring "noauto" for root device
	[  +4.195960] systemd-fstab-generator[1415]: Ignoring "noauto" for root device
	[  +0.983409] kauditd_printk_skb: 29 callbacks suppressed
	[ +11.845980] hrtimer: interrupt took 5681278 ns
	[  +0.319713] kauditd_printk_skb: 8 callbacks suppressed
	[  +9.782066] kauditd_printk_skb: 16 callbacks suppressed
	
	* 
	* ==> etcd [7b27493d59e0] <==
	* {"level":"info","ts":"2023-02-19T04:12:22.596Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-19T04:12:22.597Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-19T04:12:22.597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8218ad2aa5f3796a switched to configuration voters=(9374433023056116074)"}
	{"level":"info","ts":"2023-02-19T04:12:22.598Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c147ee1da8b8ae87","local-member-id":"8218ad2aa5f3796a","added-peer-id":"8218ad2aa5f3796a","added-peer-peer-urls":["https://172.28.246.233:2380"]}
	{"level":"info","ts":"2023-02-19T04:12:22.598Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c147ee1da8b8ae87","local-member-id":"8218ad2aa5f3796a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-19T04:12:22.598Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-19T04:12:22.604Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-19T04:12:22.606Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8218ad2aa5f3796a","initial-advertise-peer-urls":["https://172.28.244.121:2380"],"listen-peer-urls":["https://172.28.244.121:2380"],"advertise-client-urls":["https://172.28.244.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.244.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-19T04:12:22.606Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-19T04:12:22.607Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"172.28.244.121:2380"}
	{"level":"info","ts":"2023-02-19T04:12:22.607Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"172.28.244.121:2380"}
	{"level":"info","ts":"2023-02-19T04:12:24.028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8218ad2aa5f3796a is starting a new election at term 2"}
	{"level":"info","ts":"2023-02-19T04:12:24.029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8218ad2aa5f3796a became pre-candidate at term 2"}
	{"level":"info","ts":"2023-02-19T04:12:24.029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8218ad2aa5f3796a received MsgPreVoteResp from 8218ad2aa5f3796a at term 2"}
	{"level":"info","ts":"2023-02-19T04:12:24.029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8218ad2aa5f3796a became candidate at term 3"}
	{"level":"info","ts":"2023-02-19T04:12:24.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8218ad2aa5f3796a received MsgVoteResp from 8218ad2aa5f3796a at term 3"}
	{"level":"info","ts":"2023-02-19T04:12:24.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8218ad2aa5f3796a became leader at term 3"}
	{"level":"info","ts":"2023-02-19T04:12:24.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8218ad2aa5f3796a elected leader 8218ad2aa5f3796a at term 3"}
	{"level":"info","ts":"2023-02-19T04:12:24.044Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8218ad2aa5f3796a","local-member-attributes":"{Name:multinode-657900 ClientURLs:[https://172.28.244.121:2379]}","request-path":"/0/members/8218ad2aa5f3796a/attributes","cluster-id":"c147ee1da8b8ae87","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-19T04:12:24.045Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-19T04:12:24.046Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-19T04:12:24.046Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-19T04:12:24.057Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-19T04:12:24.063Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-19T04:12:24.064Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"172.28.244.121:2379"}
	
	* 
	* ==> kernel <==
	*  04:15:51 up 4 min,  0 users,  load average: 0.25, 0.32, 0.16
	Linux multinode-657900 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5300b5f40883] <==
	* I0219 04:12:27.112896       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0219 04:12:27.112931       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0219 04:12:27.112958       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0219 04:12:27.112965       1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
	I0219 04:12:27.274107       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0219 04:12:27.303287       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0219 04:12:27.307518       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0219 04:12:27.312993       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0219 04:12:27.313730       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0219 04:12:27.313788       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0219 04:12:27.314141       1 cache.go:39] Caches are synced for autoregister controller
	I0219 04:12:27.314423       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0219 04:12:27.319121       1 shared_informer.go:280] Caches are synced for configmaps
	I0219 04:12:27.341124       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0219 04:12:27.698857       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0219 04:12:28.124331       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0219 04:12:28.696494       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [172.28.244.121 172.28.246.233]
	I0219 04:12:28.698305       1 controller.go:615] quota admission added evaluator for: endpoints
	I0219 04:12:28.712021       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0219 04:12:31.031918       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0219 04:12:31.267131       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0219 04:12:31.287650       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0219 04:12:31.436514       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0219 04:12:31.448655       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0219 04:12:48.687301       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [172.28.244.121]
	
	* 
	* ==> kube-controller-manager [04ad7ad7aaca] <==
	* I0219 04:12:40.921471       1 shared_informer.go:280] Caches are synced for garbage collector
	I0219 04:12:40.921592       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	W0219 04:13:20.459317       1 topologycache.go:232] Can't get CPU or zone information for multinode-657900-m03 node
	I0219 04:13:20.461691       1 event.go:294] "Event occurred" object="multinode-657900-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-657900-m02 status is now: NodeNotReady"
	I0219 04:13:20.500448       1 event.go:294] "Event occurred" object="kube-system/kindnet-fp2c9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0219 04:13:20.534265       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48-brhr9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0219 04:13:20.578711       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-8h9z4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0219 04:13:20.622032       1 event.go:294] "Event occurred" object="multinode-657900-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-657900-m03 status is now: NodeNotReady"
	I0219 04:13:20.641032       1 event.go:294] "Event occurred" object="kube-system/kindnet-zvk4x" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0219 04:13:20.664171       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-n5vsl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0219 04:13:51.974169       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-5w5b7"
	I0219 04:13:55.669529       1 event.go:294] "Event occurred" object="multinode-657900-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-657900-m02 event: Removing Node multinode-657900-m02 from Controller"
	I0219 04:13:56.707625       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48-brhr9" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-6b86dd6d48-brhr9"
	W0219 04:13:56.707845       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-657900-m02" does not exist
	I0219 04:13:56.725730       1 range_allocator.go:372] Set node multinode-657900-m02 PodCIDR to [10.244.1.0/24]
	I0219 04:14:00.671027       1 event.go:294] "Event occurred" object="multinode-657900-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-657900-m02 event: Registered Node multinode-657900-m02 in Controller"
	W0219 04:14:07.007839       1 topologycache.go:232] Can't get CPU or zone information for multinode-657900-m02 node
	I0219 04:14:10.691163       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48-brhr9" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-6b86dd6d48-brhr9"
	W0219 04:15:14.802585       1 topologycache.go:232] Can't get CPU or zone information for multinode-657900-m02 node
	I0219 04:15:15.701819       1 event.go:294] "Event occurred" object="multinode-657900-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-657900-m03 event: Removing Node multinode-657900-m03 from Controller"
	W0219 04:15:16.630149       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-657900-m03" does not exist
	W0219 04:15:16.631451       1 topologycache.go:232] Can't get CPU or zone information for multinode-657900-m02 node
	I0219 04:15:16.689562       1 range_allocator.go:372] Set node multinode-657900-m03 PodCIDR to [10.244.2.0/24]
	I0219 04:15:20.702977       1 event.go:294] "Event occurred" object="multinode-657900-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-657900-m03 event: Registered Node multinode-657900-m03 in Controller"
	W0219 04:15:39.905058       1 topologycache.go:232] Can't get CPU or zone information for multinode-657900-m03 node
	
	* 
	* ==> kube-controller-manager [105abb87f41f] <==
	* I0219 04:02:22.824883       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fp2c9"
	I0219 04:02:22.825255       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8h9z4"
	W0219 04:02:27.487219       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-657900-m02. Assuming now as a timestamp.
	I0219 04:02:27.487511       1 event.go:294] "Event occurred" object="multinode-657900-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-657900-m02 event: Registered Node multinode-657900-m02 in Controller"
	W0219 04:02:37.904032       1 topologycache.go:232] Can't get CPU or zone information for multinode-657900-m02 node
	I0219 04:02:50.244943       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0219 04:02:50.284045       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-brhr9"
	I0219 04:02:50.312363       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-xg2wx"
	W0219 04:05:17.143407       1 topologycache.go:232] Can't get CPU or zone information for multinode-657900-m02 node
	W0219 04:05:17.145243       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-657900-m03" does not exist
	I0219 04:05:17.162483       1 range_allocator.go:372] Set node multinode-657900-m03 PodCIDR to [10.244.2.0/24]
	I0219 04:05:17.185368       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zvk4x"
	I0219 04:05:17.185466       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-n5vsl"
	W0219 04:05:17.535795       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-657900-m03. Assuming now as a timestamp.
	I0219 04:05:17.536031       1 event.go:294] "Event occurred" object="multinode-657900-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-657900-m03 event: Registered Node multinode-657900-m03 in Controller"
	W0219 04:05:32.133500       1 topologycache.go:232] Can't get CPU or zone information for multinode-657900-m02 node
	W0219 04:08:52.609723       1 topologycache.go:232] Can't get CPU or zone information for multinode-657900-m02 node
	I0219 04:08:52.611663       1 event.go:294] "Event occurred" object="multinode-657900-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-657900-m03 status is now: NodeNotReady"
	I0219 04:08:52.630166       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-n5vsl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0219 04:08:52.667795       1 event.go:294] "Event occurred" object="kube-system/kindnet-zvk4x" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	W0219 04:09:44.836347       1 topologycache.go:232] Can't get CPU or zone information for multinode-657900-m02 node
	W0219 04:09:46.157733       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-657900-m03" does not exist
	W0219 04:09:46.159735       1 topologycache.go:232] Can't get CPU or zone information for multinode-657900-m02 node
	I0219 04:09:46.177828       1 range_allocator.go:372] Set node multinode-657900-m03 PodCIDR to [10.244.3.0/24]
	W0219 04:09:55.065268       1 topologycache.go:232] Can't get CPU or zone information for multinode-657900-m03 node
	
	* 
	* ==> kube-proxy [4549704ae403] <==
	* I0219 04:12:30.420687       1 node.go:163] Successfully retrieved node IP: 172.28.244.121
	I0219 04:12:30.421704       1 server_others.go:109] "Detected node IP" address="172.28.244.121"
	I0219 04:12:30.422268       1 server_others.go:535] "Using iptables proxy"
	I0219 04:12:30.799228       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0219 04:12:30.799264       1 server_others.go:176] "Using iptables Proxier"
	I0219 04:12:30.805081       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0219 04:12:30.807417       1 server.go:655] "Version info" version="v1.26.1"
	I0219 04:12:30.807436       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0219 04:12:30.809735       1 config.go:317] "Starting service config controller"
	I0219 04:12:30.811011       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0219 04:12:30.811095       1 config.go:444] "Starting node config controller"
	I0219 04:12:30.811105       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0219 04:12:30.813186       1 config.go:226] "Starting endpoint slice config controller"
	I0219 04:12:30.813198       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0219 04:12:30.911264       1 shared_informer.go:280] Caches are synced for node config
	I0219 04:12:30.911328       1 shared_informer.go:280] Caches are synced for service config
	I0219 04:12:30.913586       1 shared_informer.go:280] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [ca0d83d4d696] <==
	* I0219 04:00:34.558667       1 node.go:163] Successfully retrieved node IP: 172.28.246.233
	I0219 04:00:34.558850       1 server_others.go:109] "Detected node IP" address="172.28.246.233"
	I0219 04:00:34.559194       1 server_others.go:535] "Using iptables proxy"
	I0219 04:00:34.635353       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0219 04:00:34.635594       1 server_others.go:176] "Using iptables Proxier"
	I0219 04:00:34.635644       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0219 04:00:34.636174       1 server.go:655] "Version info" version="v1.26.1"
	I0219 04:00:34.636196       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0219 04:00:34.638037       1 config.go:317] "Starting service config controller"
	I0219 04:00:34.638063       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0219 04:00:34.638627       1 config.go:444] "Starting node config controller"
	I0219 04:00:34.638637       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0219 04:00:34.638676       1 config.go:226] "Starting endpoint slice config controller"
	I0219 04:00:34.638685       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0219 04:00:34.738833       1 shared_informer.go:280] Caches are synced for service config
	I0219 04:00:34.738833       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0219 04:00:34.738850       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2f34e1aaa1b5] <==
	* W0219 04:00:16.535487       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0219 04:00:16.535867       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0219 04:00:16.552782       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0219 04:00:16.552819       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0219 04:00:16.579142       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0219 04:00:16.579268       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0219 04:00:16.627188       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0219 04:00:16.627590       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0219 04:00:16.693819       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0219 04:00:16.694379       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0219 04:00:16.695430       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0219 04:00:16.695471       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0219 04:00:16.703281       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0219 04:00:16.709622       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0219 04:00:16.775568       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0219 04:00:16.777199       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0219 04:00:16.830482       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0219 04:00:16.830549       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0219 04:00:16.958142       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0219 04:00:16.958279       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0219 04:00:18.851632       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0219 04:10:17.504921       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0219 04:10:17.505138       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0219 04:10:17.505339       1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0219 04:10:17.505373       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [e74be77c0722] <==
	* I0219 04:12:24.687301       1 serving.go:348] Generated self-signed cert in-memory
	W0219 04:12:27.183461       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0219 04:12:27.183876       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0219 04:12:27.184170       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0219 04:12:27.184384       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0219 04:12:27.252109       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0219 04:12:27.252837       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0219 04:12:27.262886       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0219 04:12:27.264971       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0219 04:12:27.265103       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0219 04:12:27.271719       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0219 04:12:27.378493       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sun 2023-02-19 04:11:35 UTC, ends at Sun 2023-02-19 04:15:51 UTC. --
	Feb 19 04:12:32 multinode-657900 kubelet[1421]: E0219 04:12:32.420471    1421 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Feb 19 04:12:32 multinode-657900 kubelet[1421]: E0219 04:12:32.420676    1421 projected.go:198] Error preparing data for projected volume kube-api-access-qrctm for pod default/busybox-6b86dd6d48-xg2wx: object "default"/"kube-root-ca.crt" not registered
	Feb 19 04:12:32 multinode-657900 kubelet[1421]: E0219 04:12:32.420755    1421 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab8a5a92-0809-4a36-80a2-d969e4a19341-kube-api-access-qrctm podName:ab8a5a92-0809-4a36-80a2-d969e4a19341 nodeName:}" failed. No retries permitted until 2023-02-19 04:12:36.420729648 +0000 UTC m=+18.500179160 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qrctm" (UniqueName: "kubernetes.io/projected/ab8a5a92-0809-4a36-80a2-d969e4a19341-kube-api-access-qrctm") pod "busybox-6b86dd6d48-xg2wx" (UID: "ab8a5a92-0809-4a36-80a2-d969e4a19341") : object "default"/"kube-root-ca.crt" not registered
	Feb 19 04:12:33 multinode-657900 kubelet[1421]: I0219 04:12:33.454855    1421 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33ce0bbd4136a1e11110557e1ae7511de8ebb1af43906573d9da61661733636d"
	Feb 19 04:12:33 multinode-657900 kubelet[1421]: I0219 04:12:33.487564    1421 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2cfae71205f49f9199b0b537b2cb7f216af3d0bfa8cad8d2ed5596b3268ac50"
	Feb 19 04:12:33 multinode-657900 kubelet[1421]: E0219 04:12:33.506918    1421 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-9mvfg" podUID=38bce706-085e-44e0-bf5e-97cbdebb682e
	Feb 19 04:12:33 multinode-657900 kubelet[1421]: E0219 04:12:33.508696    1421 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-xg2wx" podUID=ab8a5a92-0809-4a36-80a2-d969e4a19341
	Feb 19 04:12:34 multinode-657900 kubelet[1421]: E0219 04:12:34.780137    1421 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-xg2wx" podUID=ab8a5a92-0809-4a36-80a2-d969e4a19341
	Feb 19 04:12:34 multinode-657900 kubelet[1421]: E0219 04:12:34.780456    1421 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-9mvfg" podUID=38bce706-085e-44e0-bf5e-97cbdebb682e
	Feb 19 04:12:35 multinode-657900 kubelet[1421]: E0219 04:12:35.448517    1421 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 19 04:12:35 multinode-657900 kubelet[1421]: E0219 04:12:35.448615    1421 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/38bce706-085e-44e0-bf5e-97cbdebb682e-config-volume podName:38bce706-085e-44e0-bf5e-97cbdebb682e nodeName:}" failed. No retries permitted until 2023-02-19 04:12:43.448598662 +0000 UTC m=+25.528048174 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/38bce706-085e-44e0-bf5e-97cbdebb682e-config-volume") pod "coredns-787d4945fb-9mvfg" (UID: "38bce706-085e-44e0-bf5e-97cbdebb682e") : object "kube-system"/"coredns" not registered
	Feb 19 04:12:36 multinode-657900 kubelet[1421]: E0219 04:12:36.457655    1421 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Feb 19 04:12:36 multinode-657900 kubelet[1421]: E0219 04:12:36.457734    1421 projected.go:198] Error preparing data for projected volume kube-api-access-qrctm for pod default/busybox-6b86dd6d48-xg2wx: object "default"/"kube-root-ca.crt" not registered
	Feb 19 04:12:36 multinode-657900 kubelet[1421]: E0219 04:12:36.457887    1421 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab8a5a92-0809-4a36-80a2-d969e4a19341-kube-api-access-qrctm podName:ab8a5a92-0809-4a36-80a2-d969e4a19341 nodeName:}" failed. No retries permitted until 2023-02-19 04:12:44.457867299 +0000 UTC m=+26.537316911 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qrctm" (UniqueName: "kubernetes.io/projected/ab8a5a92-0809-4a36-80a2-d969e4a19341-kube-api-access-qrctm") pod "busybox-6b86dd6d48-xg2wx" (UID: "ab8a5a92-0809-4a36-80a2-d969e4a19341") : object "default"/"kube-root-ca.crt" not registered
	Feb 19 04:12:36 multinode-657900 kubelet[1421]: E0219 04:12:36.779364    1421 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-xg2wx" podUID=ab8a5a92-0809-4a36-80a2-d969e4a19341
	Feb 19 04:12:36 multinode-657900 kubelet[1421]: E0219 04:12:36.779699    1421 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-9mvfg" podUID=38bce706-085e-44e0-bf5e-97cbdebb682e
	Feb 19 04:12:38 multinode-657900 kubelet[1421]: E0219 04:12:38.782750    1421 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-9mvfg" podUID=38bce706-085e-44e0-bf5e-97cbdebb682e
	Feb 19 04:12:38 multinode-657900 kubelet[1421]: E0219 04:12:38.783384    1421 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-xg2wx" podUID=ab8a5a92-0809-4a36-80a2-d969e4a19341
	Feb 19 04:12:38 multinode-657900 kubelet[1421]: I0219 04:12:38.868739    1421 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Feb 19 04:13:01 multinode-657900 kubelet[1421]: I0219 04:13:01.294958    1421 scope.go:115] "RemoveContainer" containerID="b0d34c23d93e6365df8eccc588f64d9f74f67fa0640152490c47508e539f9ed9"
	Feb 19 04:13:01 multinode-657900 kubelet[1421]: I0219 04:13:01.295411    1421 scope.go:115] "RemoveContainer" containerID="b7bda78f189c01b42d0feae25386ea3125a1d80c4182f9e52dd1fbe66480ef6e"
	Feb 19 04:13:01 multinode-657900 kubelet[1421]: E0219 04:13:01.295611    1421 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4fcb063a-be6a-41e8-9379-c8f7cf16a165)\"" pod="kube-system/storage-provisioner" podUID=4fcb063a-be6a-41e8-9379-c8f7cf16a165
	Feb 19 04:13:13 multinode-657900 kubelet[1421]: I0219 04:13:13.779519    1421 scope.go:115] "RemoveContainer" containerID="b7bda78f189c01b42d0feae25386ea3125a1d80c4182f9e52dd1fbe66480ef6e"
	Feb 19 04:13:18 multinode-657900 kubelet[1421]: I0219 04:13:18.730516    1421 scope.go:115] "RemoveContainer" containerID="4c9cc5564cf44062f63e90deca3363ac792a9d6397bbe0d4b6ed10fced879eb2"
	Feb 19 04:13:18 multinode-657900 kubelet[1421]: I0219 04:13:18.789884    1421 scope.go:115] "RemoveContainer" containerID="55e12988bbaef91e3bb8f58978f5b67f3fb80fb1402860bed3edfa46fc05b6d1"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-657900 -n multinode-657900
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-657900 -n multinode-657900: (4.7041032s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-657900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (348.90s)
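For reference when reproducing the related PingHostFrom2Pods failure locally, the host-IP extraction the test runs inside the pod (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) can be sanity-checked against sample busybox-style nslookup output. The sample text below is an illustrative assumption, not output captured from this run:

```shell
# Hypothetical busybox-style nslookup output (not from this test run).
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 172.28.240.1 host.minikube.internal'

# Same extraction the test performs: take line 5 of the output,
# then the third space-delimited field, i.e. the host IP.
host_ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

Note the pipeline is brittle: it assumes the answer always lands on line 5, so a different nslookup output layout shifts the field and the test pings the wrong address.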

x
+
TestRunningBinaryUpgrade (429.7s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.47130804.exe start -p running-upgrade-940200 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:128: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.47130804.exe start -p running-upgrade-940200 --memory=2200 --vm-driver=hyperv: (3m45.1594191s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-940200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:138: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p running-upgrade-940200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (2m24.3421624s)

-- stdout --
	* [running-upgrade-940200] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=master
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	* Using the hyperv driver based on existing profile
	* Starting control plane node running-upgrade-940200 in cluster running-upgrade-940200
	* Updating the running hyperv "running-upgrade-940200" VM ...
	
	

-- /stdout --
** stderr ** 
	I0219 04:41:20.259347    8496 out.go:296] Setting OutFile to fd 1660 ...
	I0219 04:41:20.328356    8496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:41:20.328356    8496 out.go:309] Setting ErrFile to fd 1620...
	I0219 04:41:20.328356    8496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:41:20.354268    8496 out.go:303] Setting JSON to false
	I0219 04:41:20.357258    8496 start.go:125] hostinfo: {"hostname":"minikube1","uptime":18669,"bootTime":1676763010,"procs":157,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2604 Build 19045.2604","kernelVersion":"10.0.19045.2604 Build 19045.2604","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0219 04:41:20.357258    8496 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0219 04:41:20.362286    8496 out.go:177] * [running-upgrade-940200] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	I0219 04:41:20.366265    8496 notify.go:220] Checking for updates...
	I0219 04:41:20.369276    8496 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:41:20.371260    8496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0219 04:41:20.374265    8496 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0219 04:41:20.378267    8496 out.go:177]   - MINIKUBE_LOCATION=master
	I0219 04:41:20.384275    8496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0219 04:41:20.389256    8496 config.go:182] Loaded profile config "running-upgrade-940200": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0219 04:41:20.389256    8496 start_flags.go:687] config upgrade: Driver=hyperv
	I0219 04:41:20.389256    8496 start_flags.go:699] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0219 04:41:20.390265    8496 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\running-upgrade-940200\config.json ...
	I0219 04:41:20.397268    8496 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	I0219 04:41:20.405296    8496 driver.go:365] Setting default libvirt URI to qemu:///system
	I0219 04:41:22.173491    8496 out.go:177] * Using the hyperv driver based on existing profile
	I0219 04:41:22.179477    8496 start.go:296] selected driver: hyperv
	I0219 04:41:22.179477    8496 start.go:857] validating driver "hyperv" against &{Name:running-upgrade-940200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0
ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.28.250.156 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
}
	I0219 04:41:22.179477    8496 start.go:868] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0219 04:41:22.230496    8496 cni.go:84] Creating CNI manager for ""
	I0219 04:41:22.230496    8496 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0219 04:41:22.230496    8496 start_flags.go:319] config:
	{Name:running-upgrade-940200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.28.250.156 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:41:22.231493    8496 iso.go:125] acquiring lock: {Name:mk0a282de77c20a01e287b73437e6c43df35e4e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:41:22.243086    8496 out.go:177] * Starting control plane node running-upgrade-940200 in cluster running-upgrade-940200
	I0219 04:41:22.249090    8496 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W0219 04:41:22.290925    8496 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0219 04:41:22.290925    8496 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0
	I0219 04:41:22.290925    8496 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause:3.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1
	I0219 04:41:22.290925    8496 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0219 04:41:22.290925    8496 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0
	I0219 04:41:22.290925    8496 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\running-upgrade-940200\config.json ...
	I0219 04:41:22.290925    8496 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0
	I0219 04:41:22.290925    8496 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0
	I0219 04:41:22.290925    8496 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns:1.6.5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5
	I0219 04:41:22.290925    8496 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0
	I0219 04:41:22.293902    8496 cache.go:193] Successfully downloaded all kic artifacts
	I0219 04:41:22.293902    8496 start.go:364] acquiring machines lock for running-upgrade-940200: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0219 04:41:22.483625    8496 cache.go:107] acquiring lock: {Name:mk846f443ad8ebb3f71dcc8a6ad332b2ccd1fb49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:41:22.483625    8496 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0 exists
	I0219 04:41:22.483625    8496 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-apiserver_v1.17.0" took 192.7001ms
	I0219 04:41:22.483625    8496 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0 succeeded
	I0219 04:41:22.491644    8496 cache.go:107] acquiring lock: {Name:mk72ecb1f76555793f8c9be18fe62d4a9799d53f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:41:22.492643    8496 cache.go:107] acquiring lock: {Name:mkab5ef4697aba25176a9bbf5de0bbfc032f2317 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:41:22.492643    8496 cache.go:107] acquiring lock: {Name:mkfb2624f831f02f88a5c798c7a43a1bbe61fae1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:41:22.492643    8496 cache.go:107] acquiring lock: {Name:mk67b634fe9a890edc5195da54a2f3093e0c8f30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:41:22.492643    8496 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0219 04:41:22.492643    8496 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0 exists
	I0219 04:41:22.492643    8496 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0 exists
	I0219 04:41:22.492643    8496 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0 exists
	I0219 04:41:22.492643    8496 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-controller-manager_v1.17.0" took 201.7186ms
	I0219 04:41:22.492643    8496 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0 succeeded
	I0219 04:41:22.492643    8496 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 201.7186ms
	I0219 04:41:22.492643    8496 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\etcd_3.4.3-0" took 201.7186ms
	I0219 04:41:22.492643    8496 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0 succeeded
	I0219 04:41:22.492643    8496 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0219 04:41:22.492643    8496 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-proxy_v1.17.0" took 201.7186ms
	I0219 04:41:22.492643    8496 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0 succeeded
	I0219 04:41:22.498837    8496 cache.go:107] acquiring lock: {Name:mkee5b2ba88b1109b760d9a4a39a505ba4aef2c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:41:22.498837    8496 cache.go:107] acquiring lock: {Name:mka45a59e14b38ef0230da2ff86231ec86a62154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:41:22.498837    8496 cache.go:107] acquiring lock: {Name:mk8a34ca3f90bc9ebc6fc19a51807d5bbe286002 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:41:22.499142    8496 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0 exists
	I0219 04:41:22.499142    8496 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1 exists
	I0219 04:41:22.499301    8496 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-scheduler_v1.17.0" took 208.3765ms
	I0219 04:41:22.499399    8496 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5 exists
	I0219 04:41:22.499399    8496 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0 succeeded
	I0219 04:41:22.499399    8496 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\pause_3.1" took 208.4745ms
	I0219 04:41:22.499514    8496 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1 succeeded
	I0219 04:41:22.499514    8496 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\coredns_1.6.5" took 208.5899ms
	I0219 04:41:22.499514    8496 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5 succeeded
	I0219 04:41:22.499682    8496 cache.go:87] Successfully saved all images to host disk.
	I0219 04:42:50.747016    8496 start.go:368] acquired machines lock for "running-upgrade-940200" in 1m28.4534326s
	I0219 04:42:50.750838    8496 start.go:96] Skipping create...Using existing machine configuration
	I0219 04:42:50.750893    8496 fix.go:55] fixHost starting: minikube
	I0219 04:42:50.751434    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-940200 ).state
	I0219 04:42:51.502525    8496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:42:51.502525    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:42:51.502525    8496 fix.go:103] recreateIfNeeded on running-upgrade-940200: state=Running err=<nil>
	W0219 04:42:51.502525    8496 fix.go:129] unexpected machine state, will restart: <nil>
	I0219 04:42:51.508069    8496 out.go:177] * Updating the running hyperv "running-upgrade-940200" VM ...
	I0219 04:42:51.510403    8496 machine.go:88] provisioning docker machine ...
	I0219 04:42:51.510403    8496 buildroot.go:166] provisioning hostname "running-upgrade-940200"
	I0219 04:42:51.510403    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-940200 ).state
	I0219 04:42:52.236425    8496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:42:52.236637    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:42:52.236637    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-940200 ).networkadapters[0]).ipaddresses[0]
	I0219 04:42:53.361549    8496 main.go:141] libmachine: [stdout =====>] : 172.28.250.156
	
	I0219 04:42:53.361635    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:42:53.365988    8496 main.go:141] libmachine: Using SSH client type: native
	I0219 04:42:53.367023    8496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.156 22 <nil> <nil>}
	I0219 04:42:53.367023    8496 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-940200 && echo "running-upgrade-940200" | sudo tee /etc/hostname
	I0219 04:42:53.524315    8496 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-940200
	
	I0219 04:42:53.524315    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-940200 ).state
	I0219 04:42:54.315146    8496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:42:54.315300    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:42:54.315385    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-940200 ).networkadapters[0]).ipaddresses[0]
	I0219 04:42:55.521803    8496 main.go:141] libmachine: [stdout =====>] : 172.28.250.156
	
	I0219 04:42:55.521803    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:42:55.524787    8496 main.go:141] libmachine: Using SSH client type: native
	I0219 04:42:55.525799    8496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.156 22 <nil> <nil>}
	I0219 04:42:55.525799    8496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-940200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-940200/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-940200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:42:55.684620    8496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0219 04:42:55.684620    8496 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:42:55.684620    8496 buildroot.go:174] setting up certificates
	I0219 04:42:55.684620    8496 provision.go:83] configureAuth start
	I0219 04:42:55.684620    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-940200 ).state
	I0219 04:42:56.493688    8496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:42:56.493688    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:42:56.493855    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-940200 ).networkadapters[0]).ipaddresses[0]
	I0219 04:42:57.722368    8496 main.go:141] libmachine: [stdout =====>] : 172.28.250.156
	
	I0219 04:42:57.722368    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:42:57.722368    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-940200 ).state
	I0219 04:42:58.481989    8496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:42:58.482241    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:42:58.482304    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-940200 ).networkadapters[0]).ipaddresses[0]
	I0219 04:42:59.580249    8496 main.go:141] libmachine: [stdout =====>] : 172.28.250.156
	
	I0219 04:42:59.580519    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:42:59.580519    8496 provision.go:138] copyHostCerts
	I0219 04:42:59.580943    8496 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:42:59.580943    8496 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:42:59.581632    8496 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:42:59.582403    8496 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:42:59.582403    8496 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:42:59.583124    8496 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:42:59.584085    8496 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:42:59.584085    8496 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:42:59.584617    8496 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:42:59.585467    8496 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.running-upgrade-940200 san=[172.28.250.156 172.28.250.156 localhost 127.0.0.1 minikube running-upgrade-940200]
	I0219 04:42:59.705108    8496 provision.go:172] copyRemoteCerts
	I0219 04:42:59.715145    8496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:42:59.715145    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-940200 ).state
	I0219 04:43:00.455897    8496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:00.456875    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:00.456957    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-940200 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:01.669304    8496 main.go:141] libmachine: [stdout =====>] : 172.28.250.156
	
	I0219 04:43:01.669499    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:01.670045    8496 sshutil.go:53] new ssh client: &{IP:172.28.250.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-940200\id_rsa Username:docker}
	I0219 04:43:01.773462    8496 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.0583237s)
	I0219 04:43:01.774458    8496 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:43:01.798773    8496 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0219 04:43:01.820527    8496 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0219 04:43:01.840688    8496 provision.go:86] duration metric: configureAuth took 6.1560907s
	I0219 04:43:01.840885    8496 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:43:01.841414    8496 config.go:182] Loaded profile config "running-upgrade-940200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0219 04:43:01.841505    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-940200 ).state
	I0219 04:43:02.580849    8496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:02.580849    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:02.580849    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-940200 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:03.630646    8496 main.go:141] libmachine: [stdout =====>] : 172.28.250.156
	
	I0219 04:43:03.630927    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:03.635657    8496 main.go:141] libmachine: Using SSH client type: native
	I0219 04:43:03.636554    8496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.156 22 <nil> <nil>}
	I0219 04:43:03.636554    8496 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:43:03.763369    8496 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:43:03.763369    8496 buildroot.go:70] root file system type: tmpfs
	I0219 04:43:03.763369    8496 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:43:03.763369    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-940200 ).state
	I0219 04:43:04.503127    8496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:04.503177    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:04.503207    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-940200 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:05.612989    8496 main.go:141] libmachine: [stdout =====>] : 172.28.250.156
	
	I0219 04:43:05.613038    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:05.618874    8496 main.go:141] libmachine: Using SSH client type: native
	I0219 04:43:05.619681    8496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.156 22 <nil> <nil>}
	I0219 04:43:05.619781    8496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 04:43:05.768058    8496 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0219 04:43:05.768193    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-940200 ).state
	I0219 04:43:06.510938    8496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:06.510938    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:06.510938    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-940200 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:07.607351    8496 main.go:141] libmachine: [stdout =====>] : 172.28.250.156
	
	I0219 04:43:07.607637    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:07.611970    8496 main.go:141] libmachine: Using SSH client type: native
	I0219 04:43:07.612572    8496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.156 22 <nil> <nil>}
	I0219 04:43:07.612572    8496 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 04:43:23.100374    8496 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
	+++ /lib/systemd/system/docker.service.new
	@@ -3,9 +3,12 @@
	 Documentation=https://docs.docker.com
	 After=network.target  minikube-automount.service docker.socket
	 Requires= minikube-automount.service docker.socket 
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -21,7 +24,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0219 04:43:23.100468    8496 machine.go:91] provisioned docker machine in 31.5901818s
	I0219 04:43:23.100468    8496 start.go:300] post-start starting for "running-upgrade-940200" (driver="hyperv")
	I0219 04:43:23.100468    8496 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 04:43:23.109796    8496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 04:43:23.110594    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-940200 ).state
	I0219 04:43:23.875127    8496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:23.875127    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:23.875216    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-940200 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:25.046999    8496 main.go:141] libmachine: [stdout =====>] : 172.28.250.156
	
	I0219 04:43:25.046999    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:25.046999    8496 sshutil.go:53] new ssh client: &{IP:172.28.250.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-940200\id_rsa Username:docker}
	I0219 04:43:25.181033    8496 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (2.0712446s)
	I0219 04:43:25.192161    8496 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 04:43:25.202208    8496 info.go:137] Remote host: Buildroot 2019.02.7
	I0219 04:43:25.202208    8496 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 04:43:25.202894    8496 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 04:43:25.204074    8496 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 04:43:25.215386    8496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 04:43:25.236566    8496 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 04:43:25.275406    8496 start.go:303] post-start completed in 2.1749464s
	I0219 04:43:25.275406    8496 fix.go:57] fixHost completed within 34.5246406s
	I0219 04:43:25.275406    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-940200 ).state
	I0219 04:43:26.013314    8496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:26.013446    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:26.013503    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-940200 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:27.184091    8496 main.go:141] libmachine: [stdout =====>] : 172.28.250.156
	
	I0219 04:43:27.184336    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:27.188979    8496 main.go:141] libmachine: Using SSH client type: native
	I0219 04:43:27.190161    8496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.250.156 22 <nil> <nil>}
	I0219 04:43:27.190204    8496 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0219 04:43:27.387257    8496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676781807.365692385
	
	I0219 04:43:27.387257    8496 fix.go:207] guest clock: 1676781807.365692385
	I0219 04:43:27.387257    8496 fix.go:220] Guest: 2023-02-19 04:43:27.365692385 +0000 GMT Remote: 2023-02-19 04:43:25.2754066 +0000 GMT m=+125.145656301 (delta=2.090285785s)
	I0219 04:43:27.387257    8496 fix.go:191] guest clock delta is within tolerance: 2.090285785s
	I0219 04:43:27.387257    8496 start.go:83] releasing machines lock for "running-upgrade-940200", held for 36.6403769s
	I0219 04:43:27.388262    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-940200 ).state
	I0219 04:43:28.199188    8496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:28.199188    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:28.199188    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-940200 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:29.309298    8496 main.go:141] libmachine: [stdout =====>] : 172.28.250.156
	
	I0219 04:43:29.309298    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:29.312395    8496 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0219 04:43:29.312395    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-940200 ).state
	I0219 04:43:29.324655    8496 ssh_runner.go:195] Run: cat /version.json
	I0219 04:43:29.324655    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-940200 ).state
	I0219 04:43:30.052810    8496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:30.052810    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:30.052810    8496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:30.052810    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-940200 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:30.052810    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:30.052810    8496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-940200 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:31.257233    8496 main.go:141] libmachine: [stdout =====>] : 172.28.250.156
	
	I0219 04:43:31.257443    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:31.257968    8496 sshutil.go:53] new ssh client: &{IP:172.28.250.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-940200\id_rsa Username:docker}
	I0219 04:43:31.306716    8496 main.go:141] libmachine: [stdout =====>] : 172.28.250.156
	
	I0219 04:43:31.306716    8496 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:31.307308    8496 sshutil.go:53] new ssh client: &{IP:172.28.250.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-940200\id_rsa Username:docker}
	I0219 04:43:31.431097    8496 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (2.1187101s)
	I0219 04:43:31.431097    8496 ssh_runner.go:235] Completed: cat /version.json: (2.1064501s)
	W0219 04:43:31.431097    8496 start.go:396] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0219 04:43:31.440740    8496 ssh_runner.go:195] Run: systemctl --version
	I0219 04:43:31.461179    8496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0219 04:43:31.469347    8496 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0219 04:43:31.478132    8496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0219 04:43:31.496731    8496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0219 04:43:31.505781    8496 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0219 04:43:31.505841    8496 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0219 04:43:31.505949    8496 start.go:485] detecting cgroup driver to use...
	I0219 04:43:31.506126    8496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:43:31.533578    8496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0219 04:43:31.555743    8496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0219 04:43:31.565249    8496 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0219 04:43:31.575061    8496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0219 04:43:31.591969    8496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:43:31.611314    8496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0219 04:43:31.629938    8496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:43:31.650287    8496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0219 04:43:31.670490    8496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0219 04:43:31.688136    8496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0219 04:43:31.705625    8496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0219 04:43:31.723980    8496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:43:31.920515    8496 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0219 04:43:31.941253    8496 start.go:485] detecting cgroup driver to use...
	I0219 04:43:31.956581    8496 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0219 04:43:31.980917    8496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:43:32.009114    8496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0219 04:43:32.082237    8496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:43:32.108272    8496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:43:32.127123    8496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:43:32.156240    8496 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0219 04:43:32.388811    8496 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0219 04:43:32.587146    8496 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0219 04:43:32.587331    8496 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0219 04:43:32.620078    8496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:43:32.836523    8496 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:43:44.397582    8496 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.5600997s)
	I0219 04:43:44.401173    8496 out.go:177] 
	W0219 04:43:44.403102    8496 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W0219 04:43:44.403102    8496 out.go:239] * 
	W0219 04:43:44.405123    8496 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0219 04:43:44.407155    8496 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:140: upgrade from v1.6.2 to HEAD failed: out/minikube-windows-amd64.exe start -p running-upgrade-940200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-02-19 04:43:44.4749666 +0000 GMT m=+5296.146218701
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-940200 -n running-upgrade-940200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-940200 -n running-upgrade-940200: exit status 6 (19.585054s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0219 04:43:49.410693   10552 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-940200" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-940200" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-940200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-940200
E0219 04:44:05.289192   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-940200: (39.8842229s)
--- FAIL: TestRunningBinaryUpgrade (429.70s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (336.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-928900 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-928900 --driver=hyperv: (4m54.2159588s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-928900 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-928900 status -o json: exit status 1 (5.4569466s)
no_kubernetes_test.go:203: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p NoKubernetes-928900 status -o json" : exit status 1
no_kubernetes_test.go:210: failed to decode json from minikube status. args "out/minikube-windows-amd64.exe -p NoKubernetes-928900 status -o json". unexpected end of JSON input
no_kubernetes_test.go:102: Kubernetes status, got: %!s(<nil>), want: Running
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-928900 -n NoKubernetes-928900
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-928900 -n NoKubernetes-928900: (5.7467452s)
helpers_test.go:244: <<< TestNoKubernetes/serial/StartWithK8s FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestNoKubernetes/serial/StartWithK8s]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-928900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-928900 logs -n 25: (5.1241784s)
helpers_test.go:252: TestNoKubernetes/serial/StartWithK8s logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-843300 sudo cat                            | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo cat                            | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl status docker --all                        |                           |                   |         |                     |                     |
	|         | --full --no-pager                                    |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl cat docker                                 |                           |                   |         |                     |                     |
	|         | --no-pager                                           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo cat                            | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /etc/docker/daemon.json                              |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo docker                         | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | system info                                          |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl status cri-docker                          |                           |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl cat cri-docker                             |                           |                   |         |                     |                     |
	|         | --no-pager                                           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo cat                            | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo cat                            | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | cri-dockerd --version                                |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl status containerd                          |                           |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl cat containerd                             |                           |                   |         |                     |                     |
	|         | --no-pager                                           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo cat                            | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo cat                            | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /etc/containerd/config.toml                          |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | containerd config dump                               |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl status crio --all                          |                           |                   |         |                     |                     |
	|         | --full --no-pager                                    |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl cat crio --no-pager                        |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo find                           | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo crio                           | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | config                                               |                           |                   |         |                     |                     |
	| delete  | -p cilium-843300                                     | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT | 19 Feb 23 04:33 GMT |
	| start   | -p kubernetes-upgrade-803700                         | kubernetes-upgrade-803700 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | --memory=2200                                        |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                           |                   |         |                     |                     |
	| ssh     | force-systemd-flag-928900                            | force-systemd-flag-928900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:35 GMT | 19 Feb 23 04:35 GMT |
	|         | ssh docker info --format                             |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                                    |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-flag-928900                         | force-systemd-flag-928900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:35 GMT | 19 Feb 23 04:36 GMT |
	| delete  | -p offline-docker-928900                             | offline-docker-928900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:36 GMT | 19 Feb 23 04:37 GMT |
	|---------|------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/19 04:33:24
	Running on machine: minikube1
	Binary: Built with gc go1.20 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0219 04:33:24.633051    8340 out.go:296] Setting OutFile to fd 892 ...
	I0219 04:33:24.694756    8340 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:33:24.694756    8340 out.go:309] Setting ErrFile to fd 864...
	I0219 04:33:24.694756    8340 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:33:24.713738    8340 out.go:303] Setting JSON to false
	I0219 04:33:24.716777    8340 start.go:125] hostinfo: {"hostname":"minikube1","uptime":18194,"bootTime":1676763010,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2604 Build 19045.2604","kernelVersion":"10.0.19045.2604 Build 19045.2604","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0219 04:33:24.716972    8340 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0219 04:33:24.722641    8340 out.go:177] * [kubernetes-upgrade-803700] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	I0219 04:33:24.726230    8340 notify.go:220] Checking for updates...
	I0219 04:33:24.728564    8340 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:33:24.730426    8340 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0219 04:33:24.733828    8340 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0219 04:33:24.736486    8340 out.go:177]   - MINIKUBE_LOCATION=master
	I0219 04:33:24.738887    8340 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0219 04:33:24.742919    8340 config.go:182] Loaded profile config "NoKubernetes-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:33:24.743953    8340 config.go:182] Loaded profile config "force-systemd-flag-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:33:24.745105    8340 config.go:182] Loaded profile config "offline-docker-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:33:24.745251    8340 driver.go:365] Setting default libvirt URI to qemu:///system
	I0219 04:33:26.445346    8340 out.go:177] * Using the hyperv driver based on user configuration
	I0219 04:33:26.448560    8340 start.go:296] selected driver: hyperv
	I0219 04:33:26.448560    8340 start.go:857] validating driver "hyperv" against <nil>
	I0219 04:33:26.448691    8340 start.go:868] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0219 04:33:26.498468    8340 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0219 04:33:26.499478    8340 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0219 04:33:26.499478    8340 cni.go:84] Creating CNI manager for ""
	I0219 04:33:26.499478    8340 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0219 04:33:26.499478    8340 start_flags.go:319] config:
	{Name:kubernetes-upgrade-803700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-803700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:33:26.500266    8340 iso.go:125] acquiring lock: {Name:mk0a282de77c20a01e287b73437e6c43df35e4e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:33:26.503883    8340 out.go:177] * Starting control plane node kubernetes-upgrade-803700 in cluster kubernetes-upgrade-803700
	I0219 04:33:23.051933    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:23.051933    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:23.051933    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor force-systemd-flag-928900 -Count 2
	I0219 04:33:23.900669    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:23.900669    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:23.900956    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName force-systemd-flag-928900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-928900\boot2docker.iso'
	I0219 04:33:25.095266    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:25.095266    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:25.095435    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName force-systemd-flag-928900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-928900\disk.vhd'
	I0219 04:33:26.378805    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:26.378805    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:26.378805    5220 main.go:141] libmachine: Starting VM...
	I0219 04:33:26.378923    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM force-systemd-flag-928900
	I0219 04:33:26.507164    8340 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0219 04:33:26.507164    8340 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0219 04:33:26.507532    8340 cache.go:57] Caching tarball of preloaded images
	I0219 04:33:26.507717    8340 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0219 04:33:26.508012    8340 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0219 04:33:26.508178    8340 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubernetes-upgrade-803700\config.json ...
	I0219 04:33:26.508431    8340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubernetes-upgrade-803700\config.json: {Name:mk4ddd66e70d2fd67da04bdf61196627efe592a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:33:26.508699    8340 cache.go:193] Successfully downloaded all kic artifacts
	I0219 04:33:26.508699    8340 start.go:364] acquiring machines lock for kubernetes-upgrade-803700: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0219 04:33:28.104143    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:28.104324    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:28.104324    5220 main.go:141] libmachine: Waiting for host to start...
	I0219 04:33:28.104411    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:28.839826    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:28.839885    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:28.840139    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:29.905807    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:29.905807    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:30.907963    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:31.638672    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:31.639004    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:31.639071    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:32.664215    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:32.664265    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:33.667542    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:34.410306    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:34.410383    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:34.410383    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:35.409546    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:35.409546    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:36.422568    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:37.153085    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:37.153177    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:37.153248    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:38.196401    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:38.196621    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:39.199974    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:39.895640    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:39.895890    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:39.895976    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:40.895710    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:40.895740    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:41.897989    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:42.614860    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:42.614994    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:42.614994    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:43.614691    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:43.614691    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:44.629676    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:45.345881    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:45.345932    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:45.345932    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:46.365855    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:46.365941    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:47.380564    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:48.094384    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:48.094433    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:48.094433    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:49.073355    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:49.073555    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:50.074835    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:50.771127    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:50.771127    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:50.771534    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:51.748668    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:51.748668    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:52.749928    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:53.487679    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:53.487751    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:53.487751    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:54.549580    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:33:54.549580    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:54.549580    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:55.279562    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:55.279562    5220 main.go:141] libmachine: [stderr =====>] : 
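The lines above show the driver repeatedly running the same `ipaddresses[0]` PowerShell query: it comes back empty until the guest obtains a DHCP lease, then returns `172.28.243.54`. A minimal sketch of that poll-until-nonempty pattern (the `query` callable is a stand-in for the PowerShell invocation, and the retry count/delay are assumptions, not values from the source):

```python
import time

def wait_for_ip(query, retries=30, delay=1.0):
    """Poll `query` until it returns a non-empty string, as the Hyper-V
    driver does while waiting for the VM's adapter to report an address."""
    for _ in range(retries):
        ip = query()
        if ip:
            return ip
        time.sleep(delay)
    raise TimeoutError("VM never reported an IP address")

# Simulated query: empty twice (no lease yet), then the leased address.
responses = iter(["", "", "172.28.243.54"])
print(wait_for_ip(lambda: next(responses), delay=0))
```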
	I0219 04:33:55.279562    5220 machine.go:88] provisioning docker machine ...
	I0219 04:33:55.279562    5220 buildroot.go:166] provisioning hostname "force-systemd-flag-928900"
	I0219 04:33:55.279562    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:55.976533    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:55.976533    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:55.976679    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:56.955529    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:33:56.955620    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:56.960284    5220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:33:56.968397    5220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.243.54 22 <nil> <nil>}
	I0219 04:33:56.968397    5220 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-928900 && echo "force-systemd-flag-928900" | sudo tee /etc/hostname
	I0219 04:33:57.127128    5220 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-928900
	
	I0219 04:33:57.127214    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:57.839697    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:57.839767    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:57.839767    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:58.905037    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:33:58.905037    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:58.911571    5220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:33:58.912582    5220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.243.54 22 <nil> <nil>}
	I0219 04:33:58.912665    5220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-928900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-928900/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-928900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:33:59.066420    5220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
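The shell snippet executed above either rewrites an existing `127.0.1.1` entry in `/etc/hosts` or appends a new one, but only if the hostname is not already mapped. The same decision logic as a pure-function sketch (the real provisioner edits the file over SSH with `sed`/`tee`; this Python version is illustrative only):

```python
import re

def set_local_hostname(hosts_text, hostname):
    """Ensure `hostname` resolves locally: reuse an existing 127.0.1.1
    line if present, otherwise append one; no-op if already mapped."""
    lines = hosts_text.splitlines()
    if any(line.split()[-1:] == [hostname] for line in lines):
        return hosts_text  # hostname already present
    for i, line in enumerate(lines):
        if re.match(r"^127\.0\.1\.1\s", line):
            lines[i] = f"127.0.1.1 {hostname}"  # rewrite in place
            break
    else:
        lines.append(f"127.0.1.1 {hostname}")   # append new entry
    return "\n".join(lines) + "\n"

print(set_local_hostname("127.0.0.1 localhost\n", "force-systemd-flag-928900"))
```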
	I0219 04:33:59.066420    5220 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:33:59.066420    5220 buildroot.go:174] setting up certificates
	I0219 04:33:59.066420    5220 provision.go:83] configureAuth start
	I0219 04:33:59.066420    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:59.819101    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:59.819101    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:59.819101    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:00.828229    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:00.828229    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:00.828229    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:01.524885    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:01.524885    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:01.524957    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:02.551082    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:02.551116    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:02.551116    5220 provision.go:138] copyHostCerts
	I0219 04:34:02.551116    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0219 04:34:02.551116    5220 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:34:02.551645    5220 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:34:02.552091    5220 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:34:02.553159    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0219 04:34:02.553347    5220 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:34:02.553427    5220 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:34:02.553802    5220 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:34:02.554854    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0219 04:34:02.555103    5220 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:34:02.555189    5220 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:34:02.555238    5220 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
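The `copyHostCerts` lines above follow a fixed remove-then-copy pattern ("found ..., removing ..." then "cp: ..."), so re-provisioning always leaves a fresh copy of each cert. A small sketch of that idempotent sync step, under the assumption that only the found/remove/copy behavior visible in the log matters:

```python
import os
import shutil
import tempfile

def copy_host_cert(src, dst):
    """Mirror the logged pattern: delete any stale destination first,
    then copy the source in; returns the copied size in bytes."""
    if os.path.exists(dst):
        os.remove(dst)           # "found ..., removing ..."
    shutil.copyfile(src, dst)    # "cp: ... --> ..."
    return os.path.getsize(dst)

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "ca.pem")
    dst = os.path.join(d, "store-ca.pem")
    with open(src, "w") as f:
        f.write("x" * 1078)      # same byte count as the ca.pem in the log
    print(copy_host_cert(src, dst))
```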
	I0219 04:34:02.556933    5220 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-flag-928900 san=[172.28.243.54 172.28.243.54 localhost 127.0.0.1 minikube force-systemd-flag-928900]
	I0219 04:34:02.705050    5220 provision.go:172] copyRemoteCerts
	I0219 04:34:02.714030    5220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:34:02.714030    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:03.439844    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:03.439844    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:03.439844    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:04.426301    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:04.426565    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:04.426733    5220 sshutil.go:53] new ssh client: &{IP:172.28.243.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-928900\id_rsa Username:docker}
	I0219 04:34:04.535715    5220 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.8216578s)
	I0219 04:34:04.535715    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0219 04:34:04.536332    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:34:04.579113    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0219 04:34:04.579510    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0219 04:34:04.626802    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0219 04:34:04.627242    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0219 04:34:04.666400    5220 provision.go:86] duration metric: configureAuth took 5.5999509s
	I0219 04:34:04.666428    5220 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:34:04.666541    5220 config.go:182] Loaded profile config "force-systemd-flag-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:34:04.667170    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:05.373061    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:05.373061    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:05.373156    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:06.371169    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:06.371391    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:06.374891    5220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:34:06.376018    5220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.243.54 22 <nil> <nil>}
	I0219 04:34:06.376018    5220 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:34:06.520441    5220 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:34:06.520441    5220 buildroot.go:70] root file system type: tmpfs
	I0219 04:34:06.520441    5220 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:34:06.521009    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:07.258090    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:07.258220    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:07.258503    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:08.324853    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:08.324853    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:08.328446    5220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:34:08.330317    5220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.243.54 22 <nil> <nil>}
	I0219 04:34:08.330456    5220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 04:34:08.494858    5220 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0219 04:34:08.494934    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:09.207758    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:09.207914    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:09.208155    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:10.198870    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:10.198870    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:10.203255    5220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:34:10.204117    5220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.243.54 22 <nil> <nil>}
	I0219 04:34:10.204195    5220 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 04:34:11.304154    5220 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
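The unit file written above clears the inherited command with a bare `ExecStart=` before setting its own; as the in-unit comments note, systemd otherwise refuses to start the service ("Service has more than one ExecStart= setting..."). A sketch that checks an override snippet for exactly that shape (a reset entry followed by a single real command):

```python
def execstart_directives(unit_text):
    """Collect ExecStart= values in file order; a valid override starts
    with an empty value (the reset) and has exactly one real command."""
    return [line[len("ExecStart="):]
            for line in unit_text.splitlines()
            if line.startswith("ExecStart=")]

unit = """[Service]
Type=notify
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
"""
values = execstart_directives(unit)
print(values[0] == "" and len([v for v in values if v]) == 1)
```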
	I0219 04:34:11.304154    5220 machine.go:91] provisioned docker machine in 16.0246458s
	I0219 04:34:11.304154    5220 client.go:171] LocalClient.Create took 1m1.9476s
	I0219 04:34:11.304154    5220 start.go:167] duration metric: libmachine.API.Create for "force-systemd-flag-928900" took 1m1.9476s
	I0219 04:34:11.304154    5220 start.go:300] post-start starting for "force-systemd-flag-928900" (driver="hyperv")
	I0219 04:34:11.304154    5220 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 04:34:11.316745    5220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 04:34:11.316745    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:12.035732    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:12.035732    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:12.035732    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:16.974791   11220 start.go:368] acquired machines lock for "offline-docker-928900" in 1m7.6112213s
	I0219 04:34:16.974791   11220 start.go:93] Provisioning new machine with config: &{Name:offline-docker-928900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.26.1 ClusterName:offline-docker-928900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docke
r MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:34:16.974791   11220 start.go:125] createHost starting for "" (driver="hyperv")
	I0219 04:34:16.982626   11220 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0219 04:34:16.982626   11220 start.go:159] libmachine.API.Create for "offline-docker-928900" (driver="hyperv")
	I0219 04:34:16.982626   11220 client.go:168] LocalClient.Create starting
	I0219 04:34:16.983629   11220 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0219 04:34:16.983750   11220 main.go:141] libmachine: Decoding PEM data...
	I0219 04:34:16.983750   11220 main.go:141] libmachine: Parsing certificate...
	I0219 04:34:16.983750   11220 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0219 04:34:16.984369   11220 main.go:141] libmachine: Decoding PEM data...
	I0219 04:34:16.984369   11220 main.go:141] libmachine: Parsing certificate...
	I0219 04:34:16.984369   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0219 04:34:13.110873    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:13.110974    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:13.111270    5220 sshutil.go:53] new ssh client: &{IP:172.28.243.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-928900\id_rsa Username:docker}
	I0219 04:34:13.223006    5220 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.9062669s)
	I0219 04:34:13.232694    5220 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 04:34:13.239326    5220 info.go:137] Remote host: Buildroot 2021.02.12
	I0219 04:34:13.239418    5220 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 04:34:13.239793    5220 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 04:34:13.240515    5220 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 04:34:13.240515    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> /etc/ssl/certs/101482.pem
	I0219 04:34:13.250731    5220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 04:34:13.266340    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 04:34:13.305602    5220 start.go:303] post-start completed in 2.0014543s
	I0219 04:34:13.308409    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:14.031697    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:14.031961    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:14.032112    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:15.088151    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:15.088151    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:15.088342    5220 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\config.json ...
	I0219 04:34:15.091208    5220 start.go:128] duration metric: createHost completed in 1m5.7415639s
	I0219 04:34:15.091294    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:15.799393    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:15.799393    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:15.799393    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:16.825754    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:16.825754    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:16.829744    5220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:34:16.831010    5220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.243.54 22 <nil> <nil>}
	I0219 04:34:16.831010    5220 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0219 04:34:16.974379    5220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676781256.973246000
	
	I0219 04:34:16.974443    5220 fix.go:207] guest clock: 1676781256.973246000
	I0219 04:34:16.974443    5220 fix.go:220] Guest: 2023-02-19 04:34:16.973246 +0000 GMT Remote: 2023-02-19 04:34:15.0912082 +0000 GMT m=+68.042239001 (delta=1.8820378s)
	I0219 04:34:16.974544    5220 fix.go:191] guest clock delta is within tolerance: 1.8820378s
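The fix.go lines above read the guest clock over SSH, compare it to the host clock, and accept the 1.8820378s delta as "within tolerance". A sketch of that comparison using the exact timestamps from the log (the 2s threshold is an assumption; the log only shows that ~1.88s was accepted):

```python
from datetime import datetime, timezone

def clock_delta_within_tolerance(guest_epoch, host, tolerance_s=2.0):
    """Compare the guest's epoch reading against the host clock and
    report whether the skew is small enough to skip a resync."""
    delta = guest_epoch - host.timestamp()
    return abs(delta) <= tolerance_s, delta

guest = 1676781256.973246  # guest `date` output from the log
host = datetime(2023, 2, 19, 4, 34, 15, 91208, tzinfo=timezone.utc)
ok, delta = clock_delta_within_tolerance(guest, host)
print(ok, delta)
```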
	I0219 04:34:16.974544    5220 start.go:83] releasing machines lock for "force-systemd-flag-928900", held for 1m7.6250105s
	I0219 04:34:16.974791    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:17.727401    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:17.727474    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:17.727546    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:18.805132    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:18.805300    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:18.808744    5220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0219 04:34:18.808795    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:18.817624    5220 ssh_runner.go:195] Run: cat /version.json
	I0219 04:34:18.817624    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:19.582675    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:19.583619    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:19.583619    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:19.590221    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:19.590221    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:19.590221    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:20.763846    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:20.763846    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:20.763846    5220 sshutil.go:53] new ssh client: &{IP:172.28.243.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-928900\id_rsa Username:docker}
	I0219 04:34:20.783576    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:20.783576    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:20.783576    5220 sshutil.go:53] new ssh client: &{IP:172.28.243.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-928900\id_rsa Username:docker}
	I0219 04:34:20.875334    5220 ssh_runner.go:235] Completed: cat /version.json: (2.0577174s)
	I0219 04:34:20.886793    5220 ssh_runner.go:195] Run: systemctl --version
	I0219 04:34:21.330614    5220 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.5218783s)
	I0219 04:34:21.340674    5220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0219 04:34:21.348893    5220 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0219 04:34:21.358782    5220 ssh_runner.go:195] Run: which cri-dockerd
	I0219 04:34:21.374760    5220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0219 04:34:21.392276    5220 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0219 04:34:21.432718    5220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0219 04:34:21.460822    5220 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0219 04:34:21.460991    5220 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:34:21.469471    5220 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:34:21.506714    5220 docker.go:630] Got preloaded images: 
	I0219 04:34:21.506714    5220 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
	I0219 04:34:21.517310    5220 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 04:34:21.545059    5220 ssh_runner.go:195] Run: which lz4
	I0219 04:34:21.551467    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0219 04:34:21.561569    5220 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0219 04:34:21.567710    5220 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0219 04:34:21.567851    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416334111 bytes)
	I0219 04:34:17.409753   11220 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0219 04:34:17.409814   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:17.409902   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0219 04:34:18.092537   11220 main.go:141] libmachine: [stdout =====>] : False
	
	I0219 04:34:18.092537   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:18.092627   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 04:34:18.615997   11220 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 04:34:18.616158   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:18.616225   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 04:34:20.144056   11220 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 04:34:20.144138   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:20.146069   11220 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso...
	I0219 04:34:20.545881   11220 main.go:141] libmachine: Creating SSH key...
	I0219 04:34:21.214518   11220 main.go:141] libmachine: Creating VM...
	I0219 04:34:21.214518   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 04:34:24.179644    5220 docker.go:594] Took 2.628136 seconds to copy over tarball
	I0219 04:34:24.191171    5220 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0219 04:34:22.792254   11220 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 04:34:22.792377   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:22.792377   11220 main.go:141] libmachine: Using switch "Default Switch"
	I0219 04:34:22.792377   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 04:34:23.559480   11220 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 04:34:23.559511   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:23.559511   11220 main.go:141] libmachine: Creating VHD
	I0219 04:34:23.559511   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\fixed.vhd' -SizeBytes 10MB -Fixed
	I0219 04:34:25.339904   11220 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9DD08E28-24B0-412E-B53A-50C06EC6A781
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0219 04:34:25.339964   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:25.339964   11220 main.go:141] libmachine: Writing magic tar header
	I0219 04:34:25.339964   11220 main.go:141] libmachine: Writing SSH key tar header
	I0219 04:34:25.351267   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\disk.vhd' -VHDType Dynamic -DeleteSource
	I0219 04:34:27.101726   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:27.102164   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:27.102164   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\disk.vhd' -SizeBytes 20000MB
	I0219 04:34:28.491356   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:28.491356   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:28.491356   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM offline-docker-928900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0219 04:34:34.301355   11220 main.go:141] libmachine: [stdout =====>] : 
	Name                  State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                  ----- ----------- ----------------- ------   ------             -------
	offline-docker-928900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0219 04:34:34.301355   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:34.301355   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName offline-docker-928900 -DynamicMemoryEnabled $false
	I0219 04:34:35.645677   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:35.645893   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:35.645893   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor offline-docker-928900 -Count 2
	I0219 04:34:36.422631   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:36.422631   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:36.422631   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName offline-docker-928900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\boot2docker.iso'
	I0219 04:34:34.985084    5220 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (10.7938572s)
	I0219 04:34:34.985149    5220 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0219 04:34:35.051490    5220 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 04:34:35.069551    5220 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0219 04:34:35.112083    5220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:34:35.282470    5220 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:34:37.665855    5220 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.383393s)
	I0219 04:34:37.665997    5220 start.go:485] detecting cgroup driver to use...
	I0219 04:34:37.666023    5220 start.go:489] using "systemd" cgroup driver as enforced via flags
	I0219 04:34:37.666023    5220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:34:37.699088    5220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0219 04:34:37.725267    5220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0219 04:34:37.741490    5220 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
	I0219 04:34:37.746201    5220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0219 04:34:37.775871    5220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:34:37.801921    5220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0219 04:34:37.830765    5220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:34:37.861662    5220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0219 04:34:37.889904    5220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0219 04:34:37.919452    5220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0219 04:34:37.948024    5220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0219 04:34:37.976090    5220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:34:38.170683    5220 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0219 04:34:38.199764    5220 start.go:485] detecting cgroup driver to use...
	I0219 04:34:38.199764    5220 start.go:489] using "systemd" cgroup driver as enforced via flags
	I0219 04:34:38.210588    5220 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0219 04:34:38.242189    5220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:34:38.279769    5220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0219 04:34:38.325812    5220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:34:38.355859    5220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:34:38.388478    5220 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0219 04:34:38.452982    5220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:34:38.475416    5220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:34:38.520373    5220 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0219 04:34:38.690755    5220 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0219 04:34:38.850838    5220 docker.go:529] configuring docker to use "systemd" as cgroup driver...
	I0219 04:34:38.850945    5220 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (143 bytes)
	I0219 04:34:38.897315    5220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:34:39.074481    5220 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:34:40.627257    5220 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5527819s)
	I0219 04:34:40.638087    5220 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:34:40.808939    5220 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0219 04:34:40.985950    5220 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:34:41.175930    5220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:34:41.348019    5220 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0219 04:34:41.372505    5220 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0219 04:34:41.384586    5220 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0219 04:34:41.393723    5220 start.go:553] Will wait 60s for crictl version
	I0219 04:34:41.403768    5220 ssh_runner.go:195] Run: which crictl
	I0219 04:34:41.428807    5220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0219 04:34:41.574548    5220 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0219 04:34:41.583792    5220 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:34:41.656445    5220 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:34:41.703693    5220 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0219 04:34:41.703781    5220 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0219 04:34:41.715806    5220 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0219 04:34:41.715806    5220 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0219 04:34:41.715806    5220 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0219 04:34:41.715806    5220 ip.go:207] Found interface: {Index:11 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7f:a7:14 Flags:up|broadcast|multicast|running}
	I0219 04:34:41.726305    5220 ip.go:210] interface addr: fe80::8ff9:73c7:b894:c84f/64
	I0219 04:34:41.726305    5220 ip.go:210] interface addr: 172.28.240.1/20
	I0219 04:34:41.737356    5220 ssh_runner.go:195] Run: grep 172.28.240.1	host.minikube.internal$ /etc/hosts
	I0219 04:34:41.743634    5220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:34:41.764011    5220 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:34:41.775947    5220 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:34:41.811137    5220 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:34:41.811231    5220 docker.go:560] Images already preloaded, skipping extraction
	I0219 04:34:41.820239    5220 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:34:41.850261    5220 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:34:41.850315    5220 cache_images.go:84] Images are preloaded, skipping loading
	I0219 04:34:41.859282    5220 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0219 04:34:41.902611    5220 cni.go:84] Creating CNI manager for ""
	I0219 04:34:41.902713    5220 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:34:41.902713    5220 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0219 04:34:41.902812    5220 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.243.54 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-928900 NodeName:force-systemd-flag-928900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.243.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.243.54 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0219 04:34:41.903069    5220 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.243.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "force-systemd-flag-928900"
	  kubeletExtraArgs:
	    node-ip: 172.28.243.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.243.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0219 04:34:41.903274    5220 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=force-systemd-flag-928900 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.243.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:force-systemd-flag-928900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0219 04:34:41.912277    5220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0219 04:34:41.925629    5220 binaries.go:44] Found k8s binaries, skipping transfer
	I0219 04:34:41.938451    5220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0219 04:34:41.953398    5220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (458 bytes)
	I0219 04:34:41.980572    5220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0219 04:34:42.010175    5220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0219 04:34:42.051484    5220 ssh_runner.go:195] Run: grep 172.28.243.54	control-plane.minikube.internal$ /etc/hosts
	I0219 04:34:42.056876    5220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.243.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:34:42.077846    5220 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900 for IP: 172.28.243.54
	I0219 04:34:42.077950    5220 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:34:42.078720    5220 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0219 04:34:42.079084    5220 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0219 04:34:42.079893    5220 certs.go:315] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\client.key
	I0219 04:34:42.080070    5220 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\client.crt with IP's: []
	I0219 04:34:42.150414    5220 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\client.crt ...
	I0219 04:34:42.150414    5220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\client.crt: {Name:mkb85c477f88e6f9cd46fb9c3bea22727d044627 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:34:42.152090    5220 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\client.key ...
	I0219 04:34:42.152090    5220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\client.key: {Name:mk850913db40a4f95d41bc69aa74a50088d16df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:34:42.152090    5220 certs.go:315] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.key.018aadec
	I0219 04:34:42.152090    5220 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.crt.018aadec with IP's: [172.28.243.54 10.96.0.1 127.0.0.1 10.0.0.1]
	I0219 04:34:37.512665   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:37.512665   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:37.512741   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName offline-docker-928900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\disk.vhd'
	I0219 04:34:38.789511   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:38.789660   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:38.789660   11220 main.go:141] libmachine: Starting VM...
	I0219 04:34:38.789710   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM offline-docker-928900
	I0219 04:34:40.428286   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:40.428469   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:40.428469   11220 main.go:141] libmachine: Waiting for host to start...
	I0219 04:34:40.428545   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:34:41.173008   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:41.173040   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:41.173100   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:42.224654   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:42.224725   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:42.338665    5220 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.crt.018aadec ...
	I0219 04:34:42.339665    5220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.crt.018aadec: {Name:mk61eea8985d9338a76e40825c85fc75f969855d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:34:42.340935    5220 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.key.018aadec ...
	I0219 04:34:42.340935    5220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.key.018aadec: {Name:mk86bee79e9bc4255c91c9ed2345fa7459e0068e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:34:42.341255    5220 certs.go:333] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.crt.018aadec -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.crt
	I0219 04:34:42.348537    5220 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.key.018aadec -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.key
	I0219 04:34:42.350401    5220 certs.go:315] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.key
	I0219 04:34:42.350509    5220 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.crt with IP's: []
	I0219 04:34:42.488905    5220 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.crt ...
	I0219 04:34:42.488905    5220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.crt: {Name:mkcc084ac6037cdb1825a07c409210c107ee7920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:34:42.489745    5220 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.key ...
	I0219 04:34:42.489745    5220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.key: {Name:mk1b6ef1d5e33d0f239dc57861567013b879b61a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:34:42.491557    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0219 04:34:42.491557    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0219 04:34:42.491557    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0219 04:34:42.498710    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0219 04:34:42.498971    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0219 04:34:42.499133    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0219 04:34:42.499282    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0219 04:34:42.499421    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0219 04:34:42.499614    5220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem (1338 bytes)
	W0219 04:34:42.500328    5220 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148_empty.pem, impossibly tiny 0 bytes
	I0219 04:34:42.500328    5220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0219 04:34:42.500578    5220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0219 04:34:42.500578    5220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0219 04:34:42.501159    5220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0219 04:34:42.501159    5220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem (1708 bytes)
	I0219 04:34:42.501159    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> /usr/share/ca-certificates/101482.pem
	I0219 04:34:42.501159    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:34:42.501159    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem -> /usr/share/ca-certificates/10148.pem
	I0219 04:34:42.502400    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0219 04:34:42.540648    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0219 04:34:42.580076    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0219 04:34:42.621450    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0219 04:34:42.665256    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0219 04:34:42.704781    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0219 04:34:42.744960    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0219 04:34:42.796276    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0219 04:34:42.843697    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /usr/share/ca-certificates/101482.pem (1708 bytes)
	I0219 04:34:42.884663    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0219 04:34:42.933909    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem --> /usr/share/ca-certificates/10148.pem (1338 bytes)
	I0219 04:34:42.975689    5220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0219 04:34:43.019434    5220 ssh_runner.go:195] Run: openssl version
	I0219 04:34:43.045399    5220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101482.pem && ln -fs /usr/share/ca-certificates/101482.pem /etc/ssl/certs/101482.pem"
	I0219 04:34:43.080345    5220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101482.pem
	I0219 04:34:43.087025    5220 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:34:43.098551    5220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101482.pem
	I0219 04:34:43.120688    5220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101482.pem /etc/ssl/certs/3ec20f2e.0"
	I0219 04:34:43.152455    5220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0219 04:34:43.181454    5220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:34:43.188587    5220 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:34:43.196626    5220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:34:43.215894    5220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0219 04:34:43.254301    5220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10148.pem && ln -fs /usr/share/ca-certificates/10148.pem /etc/ssl/certs/10148.pem"
	I0219 04:34:43.284862    5220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10148.pem
	I0219 04:34:43.291638    5220 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:34:43.301915    5220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10148.pem
	I0219 04:34:43.319650    5220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10148.pem /etc/ssl/certs/51391683.0"
	I0219 04:34:43.336652    5220 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-928900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:force-systemd-flag-928900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.243.54 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:34:43.345293    5220 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0219 04:34:43.386711    5220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0219 04:34:43.410197    5220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0219 04:34:43.434079    5220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0219 04:34:43.449076    5220 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0219 04:34:43.449165    5220 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0219 04:34:43.692893    5220 kubeadm.go:322] W0219 04:34:43.682257    1496 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:34:44.264265    5220 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0219 04:34:43.226025   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:34:43.965581   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:43.965581   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:43.965898   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:44.962563   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:44.962603   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:45.963293   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:34:46.659740   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:46.659945   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:46.659999   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:47.641677   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:47.641795   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:48.643900   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:34:49.373038   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:49.373365   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:49.373365   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:50.375445   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:50.375632   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:51.376536   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:34:52.069111   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:52.069386   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:52.069500   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:53.132219   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:53.132219   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:54.147392   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:34:54.903422   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:54.903422   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:54.903422   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:55.943601   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:55.943601   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:56.957757   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:34:57.701107   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:57.701107   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:57.701107   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:58.744699   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:58.744699   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:59.746232   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:00.462606   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:00.462606   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:00.462606   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:01.489784   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:35:01.489889   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:03.504636    5220 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0219 04:35:03.504636    5220 kubeadm.go:322] [preflight] Running pre-flight checks
	I0219 04:35:03.505615    5220 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0219 04:35:03.506079    5220 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0219 04:35:03.506436    5220 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0219 04:35:03.506436    5220 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0219 04:35:03.510211    5220 out.go:204]   - Generating certificates and keys ...
	I0219 04:35:03.510488    5220 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0219 04:35:03.510812    5220 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0219 04:35:03.511153    5220 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0219 04:35:03.511414    5220 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0219 04:35:03.511648    5220 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0219 04:35:03.511842    5220 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0219 04:35:03.511907    5220 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0219 04:35:03.512529    5220 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-928900 localhost] and IPs [172.28.243.54 127.0.0.1 ::1]
	I0219 04:35:03.512725    5220 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0219 04:35:03.513236    5220 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-928900 localhost] and IPs [172.28.243.54 127.0.0.1 ::1]
	I0219 04:35:03.513371    5220 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0219 04:35:03.513537    5220 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0219 04:35:03.513663    5220 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0219 04:35:03.513663    5220 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0219 04:35:03.513663    5220 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0219 04:35:03.514289    5220 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0219 04:35:03.514289    5220 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0219 04:35:03.514289    5220 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0219 04:35:03.515052    5220 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0219 04:35:03.515052    5220 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0219 04:35:03.515584    5220 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0219 04:35:03.515824    5220 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0219 04:35:03.518413    5220 out.go:204]   - Booting up control plane ...
	I0219 04:35:03.519029    5220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0219 04:35:03.519029    5220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0219 04:35:03.519493    5220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0219 04:35:03.519723    5220 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0219 04:35:03.520381    5220 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0219 04:35:03.520428    5220 kubeadm.go:322] [apiclient] All control plane components are healthy after 14.004312 seconds
	I0219 04:35:03.520428    5220 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0219 04:35:03.521474    5220 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0219 04:35:03.521720    5220 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0219 04:35:03.521897    5220 kubeadm.go:322] [mark-control-plane] Marking the node force-systemd-flag-928900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0219 04:35:03.521897    5220 kubeadm.go:322] [bootstrap-token] Using token: hrl8it.336vci6t8g26yai3
	I0219 04:35:03.525897    5220 out.go:204]   - Configuring RBAC rules ...
	I0219 04:35:03.526134    5220 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0219 04:35:03.526134    5220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0219 04:35:03.526134    5220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0219 04:35:03.526134    5220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0219 04:35:03.526134    5220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0219 04:35:03.526134    5220 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0219 04:35:03.526134    5220 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0219 04:35:03.526134    5220 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0219 04:35:03.526134    5220 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0219 04:35:03.526134    5220 kubeadm.go:322] 
	I0219 04:35:03.526134    5220 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0219 04:35:03.526134    5220 kubeadm.go:322] 
	I0219 04:35:03.526134    5220 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0219 04:35:03.526134    5220 kubeadm.go:322] 
	I0219 04:35:03.526134    5220 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0219 04:35:03.526134    5220 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0219 04:35:03.526134    5220 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0219 04:35:03.526134    5220 kubeadm.go:322] 
	I0219 04:35:03.526134    5220 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0219 04:35:03.526134    5220 kubeadm.go:322] 
	I0219 04:35:03.526134    5220 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0219 04:35:03.526134    5220 kubeadm.go:322] 
	I0219 04:35:03.526134    5220 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0219 04:35:03.526134    5220 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0219 04:35:03.526134    5220 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0219 04:35:03.526134    5220 kubeadm.go:322] 
	I0219 04:35:03.526134    5220 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0219 04:35:03.526134    5220 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0219 04:35:03.526134    5220 kubeadm.go:322] 
	I0219 04:35:03.526134    5220 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token hrl8it.336vci6t8g26yai3 \
	I0219 04:35:03.526134    5220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 \
	I0219 04:35:03.526134    5220 kubeadm.go:322] 	--control-plane 
	I0219 04:35:03.529066    5220 kubeadm.go:322] 
	I0219 04:35:03.529066    5220 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0219 04:35:03.529209    5220 kubeadm.go:322] 
	I0219 04:35:03.529339    5220 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token hrl8it.336vci6t8g26yai3 \
	I0219 04:35:03.529339    5220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 
	I0219 04:35:03.529339    5220 cni.go:84] Creating CNI manager for ""
	I0219 04:35:03.529339    5220 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:35:03.533439    5220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0219 04:35:03.553854    5220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0219 04:35:03.596864    5220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0219 04:35:03.656270    5220 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0219 04:35:03.668736    5220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:35:03.669994    5220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=b522747fea7d12101d906a75c46b71d9d9f96e61 minikube.k8s.io/name=force-systemd-flag-928900 minikube.k8s.io/updated_at=2023_02_19T04_35_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:35:03.696829    5220 ops.go:34] apiserver oom_adj: -16
	I0219 04:35:04.290947    5220 kubeadm.go:1073] duration metric: took 634.626ms to wait for elevateKubeSystemPrivileges.
	I0219 04:35:04.328761    5220 kubeadm.go:403] StartCluster complete in 20.9921778s
	I0219 04:35:04.328872    5220 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:35:04.329102    5220 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:35:04.330729    5220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:35:04.332478    5220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0219 04:35:04.332657    5220 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0219 04:35:04.333015    5220 addons.go:65] Setting storage-provisioner=true in profile "force-systemd-flag-928900"
	I0219 04:35:04.333078    5220 addons.go:65] Setting default-storageclass=true in profile "force-systemd-flag-928900"
	I0219 04:35:04.333201    5220 config.go:182] Loaded profile config "force-systemd-flag-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:35:04.333140    5220 addons.go:227] Setting addon storage-provisioner=true in "force-systemd-flag-928900"
	I0219 04:35:04.333297    5220 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-flag-928900"
	I0219 04:35:04.333297    5220 host.go:66] Checking if "force-systemd-flag-928900" exists ...
	I0219 04:35:04.334042    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:35:04.334977    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:35:04.341301    5220 kapi.go:59] client config for force-systemd-flag-928900: &rest.Config{Host:"https://172.28.243.54:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\force-systemd-flag-928900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\force-systemd-flag-928900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:35:04.342293    5220 cert_rotation.go:137] Starting client certificate rotation controller
	I0219 04:35:04.556338    5220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0219 04:35:04.929561    5220 kapi.go:248] "coredns" deployment in "kube-system" namespace and "force-systemd-flag-928900" context rescaled to 1 replicas
	I0219 04:35:04.929561    5220 start.go:223] Will wait 6m0s for node &{Name: IP:172.28.243.54 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:35:04.936767    5220 out.go:177] * Verifying Kubernetes components...
	I0219 04:35:04.948444    5220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:35:05.141332    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:05.141381    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:05.141485    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:05.141485    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:05.145405    5220 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0219 04:35:05.143702    5220 kapi.go:59] client config for force-systemd-flag-928900: &rest.Config{Host:"https://172.28.243.54:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\force-systemd-flag-928900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\force-systemd-flag-928900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil)
, KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:35:05.147647    5220 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0219 04:35:05.147647    5220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0219 04:35:05.147647    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:35:05.156052    5220 addons.go:227] Setting addon default-storageclass=true in "force-systemd-flag-928900"
	I0219 04:35:05.156052    5220 host.go:66] Checking if "force-systemd-flag-928900" exists ...
	I0219 04:35:05.157533    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:35:05.957384    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:05.957384    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:05.957384    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:05.957384    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:05.957384    5220 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0219 04:35:05.957384    5220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0219 04:35:05.957384    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:05.957384    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:35:06.047430    5220 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.4910961s)
	I0219 04:35:06.047430    5220 start.go:921] {"host.minikube.internal": 172.28.240.1} host record injected into CoreDNS's ConfigMap
	I0219 04:35:06.047430    5220 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.0989897s)
	I0219 04:35:06.049419    5220 kapi.go:59] client config for force-systemd-flag-928900: &rest.Config{Host:"https://172.28.243.54:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\force-systemd-flag-928900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\force-systemd-flag-928900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil)
, KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:35:06.050469    5220 api_server.go:51] waiting for apiserver process to appear ...
	I0219 04:35:06.065425    5220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:35:06.102423    5220 api_server.go:71] duration metric: took 1.1719386s to wait for apiserver process to appear ...
	I0219 04:35:06.102423    5220 api_server.go:87] waiting for apiserver healthz status ...
	I0219 04:35:06.102423    5220 api_server.go:252] Checking apiserver healthz at https://172.28.243.54:8443/healthz ...
	I0219 04:35:06.126773    5220 api_server.go:278] https://172.28.243.54:8443/healthz returned 200:
	ok
	I0219 04:35:06.129531    5220 api_server.go:140] control plane version: v1.26.1
	I0219 04:35:06.129628    5220 api_server.go:130] duration metric: took 27.205ms to wait for apiserver health ...
	I0219 04:35:06.129628    5220 system_pods.go:43] waiting for kube-system pods to appear ...
	I0219 04:35:06.138592    5220 system_pods.go:59] 4 kube-system pods found
	I0219 04:35:06.138592    5220 system_pods.go:61] "etcd-force-systemd-flag-928900" [2ff2300b-fa0a-42bf-b7ce-45d35c3953c3] Pending
	I0219 04:35:06.138592    5220 system_pods.go:61] "kube-apiserver-force-systemd-flag-928900" [20a7d0f0-3512-4fbb-8fdf-137c2eb9660f] Pending
	I0219 04:35:06.138592    5220 system_pods.go:61] "kube-controller-manager-force-systemd-flag-928900" [446ae01f-7e3a-45d4-996c-fdfa93864f49] Running
	I0219 04:35:06.138592    5220 system_pods.go:61] "kube-scheduler-force-systemd-flag-928900" [a8716c8a-b6bd-4e19-8a5e-103af1e47d69] Pending
	I0219 04:35:06.138592    5220 system_pods.go:74] duration metric: took 8.8446ms to wait for pod list to return data ...
	I0219 04:35:06.138592    5220 kubeadm.go:578] duration metric: took 1.2081075s to wait for : map[apiserver:true system_pods:true] ...
	I0219 04:35:06.138592    5220 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:35:06.142599    5220 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:35:06.142599    5220 node_conditions.go:123] node cpu capacity is 2
	I0219 04:35:06.142599    5220 node_conditions.go:105] duration metric: took 4.0075ms to run NodePressure ...
	I0219 04:35:06.142599    5220 start.go:228] waiting for startup goroutines ...
	I0219 04:35:06.797916    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:06.797916    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:06.797916    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:07.151655    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:35:07.151837    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:07.152060    5220 sshutil.go:53] new ssh client: &{IP:172.28.243.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-928900\id_rsa Username:docker}
	I0219 04:35:02.504114   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:03.235447   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:03.235494   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:03.235557   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:04.306185   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:35:04.306224   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:05.312517   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:06.147589   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:06.147860   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:06.148003   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:07.306178    5220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0219 04:35:07.923703    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:35:07.923756    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:07.923843    5220 sshutil.go:53] new ssh client: &{IP:172.28.243.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-928900\id_rsa Username:docker}
	I0219 04:35:08.060537    5220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0219 04:35:08.366548    5220 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0219 04:35:08.370099    5220 addons.go:492] enable addons completed in 4.037519s: enabled=[storage-provisioner default-storageclass]
	I0219 04:35:08.370140    5220 start.go:233] waiting for cluster config update ...
	I0219 04:35:08.370203    5220 start.go:242] writing updated cluster config ...
	I0219 04:35:08.382006    5220 ssh_runner.go:195] Run: rm -f paused
	I0219 04:35:08.571697    5220 start.go:555] kubectl: 1.18.2, cluster: 1.26.1 (minor skew: 8)
	I0219 04:35:08.574071    5220 out.go:177] 
	W0219 04:35:08.577048    5220 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.1.
	I0219 04:35:08.580398    5220 out.go:177]   - Want kubectl v1.26.1? Try 'minikube kubectl -- get pods -A'
	I0219 04:35:08.582869    5220 out.go:177] * Done! kubectl is now configured to use "force-systemd-flag-928900" cluster and "default" namespace by default
	I0219 04:35:07.325007   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:07.325100   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:07.325100   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:08.112980   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:08.112980   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:08.112980   11220 machine.go:88] provisioning docker machine ...
	I0219 04:35:08.112980   11220 buildroot.go:166] provisioning hostname "offline-docker-928900"
	I0219 04:35:08.112980   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:08.838290   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:08.838290   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:08.838290   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:09.910174   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:09.910294   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:09.917432   11220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:35:09.918161   11220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.85 22 <nil> <nil>}
	I0219 04:35:09.918161   11220 main.go:141] libmachine: About to run SSH command:
	sudo hostname offline-docker-928900 && echo "offline-docker-928900" | sudo tee /etc/hostname
	I0219 04:35:10.097469   11220 main.go:141] libmachine: SSH cmd err, output: <nil>: offline-docker-928900
	
	I0219 04:35:10.097677   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:10.844543   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:10.844543   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:10.844543   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:11.892794   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:11.893005   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:11.897354   11220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:35:11.898223   11220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.85 22 <nil> <nil>}
	I0219 04:35:11.898223   11220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\soffline-docker-928900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 offline-docker-928900/g' /etc/hosts;
				else 
					echo '127.0.1.1 offline-docker-928900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:35:12.054424   11220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0219 04:35:12.054424   11220 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:35:12.054424   11220 buildroot.go:174] setting up certificates
	I0219 04:35:12.054424   11220 provision.go:83] configureAuth start
	I0219 04:35:12.054424   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:12.816554   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:12.816554   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:12.816554   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:13.878725   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:13.879128   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:13.879128   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:14.644427   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:14.644740   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:14.644740   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:15.727749   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:15.727995   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:15.728066   11220 provision.go:138] copyHostCerts
	I0219 04:35:15.728066   11220 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:35:15.728066   11220 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:35:15.728777   11220 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:35:15.730146   11220 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:35:15.730146   11220 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:35:15.730459   11220 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:35:15.731677   11220 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:35:15.731677   11220 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:35:15.732178   11220 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:35:15.733165   11220 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.offline-docker-928900 san=[172.28.246.85 172.28.246.85 localhost 127.0.0.1 minikube offline-docker-928900]
	I0219 04:35:16.074222   11220 provision.go:172] copyRemoteCerts
	I0219 04:35:16.084723   11220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:35:16.085727   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:16.822104   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:16.822104   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:16.822104   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:17.899971   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:17.899971   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:17.899971   11220 sshutil.go:53] new ssh client: &{IP:172.28.246.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\id_rsa Username:docker}
	I0219 04:35:18.010136   11220 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.925419s)
	I0219 04:35:18.010136   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:35:18.052943   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0219 04:35:18.095782   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0219 04:35:18.135601   11220 provision.go:86] duration metric: configureAuth took 6.081197s
	I0219 04:35:18.135601   11220 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:35:18.136351   11220 config.go:182] Loaded profile config "offline-docker-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:35:18.136351   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:18.878431   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:18.878661   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:18.878661   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:19.916555   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:19.916623   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:19.923731   11220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:35:19.924488   11220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.85 22 <nil> <nil>}
	I0219 04:35:19.924488   11220 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:35:20.082546   11220 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:35:20.082601   11220 buildroot.go:70] root file system type: tmpfs
	I0219 04:35:20.082601   11220 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:35:20.082601   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:20.819148   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:20.819148   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:20.819281   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:21.830798   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:21.830798   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:21.834751   11220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:35:21.836451   11220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.85 22 <nil> <nil>}
	I0219 04:35:21.836451   11220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="HTTP_PROXY=172.16.1.1:1"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 04:35:22.016487   11220 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=HTTP_PROXY=172.16.1.1:1
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0219 04:35:22.016585   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:22.733731   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:22.733731   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:22.733731   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:23.798958   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:23.798958   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:23.802678   11220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:35:23.803544   11220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.85 22 <nil> <nil>}
	I0219 04:35:23.803544   11220 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 04:35:24.889781   11220 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0219 04:35:24.889899   11220 machine.go:91] provisioned docker machine in 16.7769744s
	I0219 04:35:24.889899   11220 client.go:171] LocalClient.Create took 1m7.9074971s
	I0219 04:35:24.890006   11220 start.go:167] duration metric: libmachine.API.Create for "offline-docker-928900" took 1m7.9076042s
	I0219 04:35:24.890006   11220 start.go:300] post-start starting for "offline-docker-928900" (driver="hyperv")
	I0219 04:35:24.890006   11220 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 04:35:24.900188   11220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 04:35:24.900188   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:25.634984   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:25.634984   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:25.635074   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:26.681276   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:26.681473   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:26.681862   11220 sshutil.go:53] new ssh client: &{IP:172.28.246.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\id_rsa Username:docker}
	I0219 04:35:26.791898   11220 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.8917162s)
	I0219 04:35:26.802687   11220 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 04:35:26.808600   11220 info.go:137] Remote host: Buildroot 2021.02.12
	I0219 04:35:26.808600   11220 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 04:35:26.809262   11220 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 04:35:26.810319   11220 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 04:35:26.822430   11220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 04:35:26.838434   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 04:35:26.881871   11220 start.go:303] post-start completed in 1.9918715s
	I0219 04:35:26.886058   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:30.693433    1628 start.go:368] acquired machines lock for "NoKubernetes-928900" in 2m20.9931914s
	I0219 04:35:30.693832    1628 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-928900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:NoKubernetes-928900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:35:30.693832    1628 start.go:125] createHost starting for "" (driver="hyperv")
	I0219 04:35:27.595294   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:27.595294   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:27.595419   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:28.667384   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:28.667384   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:28.667978   11220 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\config.json ...
	I0219 04:35:28.670918   11220 start.go:128] duration metric: createHost completed in 1m11.6963636s
	I0219 04:35:28.670918   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:29.447502   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:29.447578   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:29.447578   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:30.544615   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:30.544615   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:30.549965   11220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:35:30.550553   11220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.85 22 <nil> <nil>}
	I0219 04:35:30.550553   11220 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0219 04:35:30.692713   11220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676781330.691932700
	
	I0219 04:35:30.692713   11220 fix.go:207] guest clock: 1676781330.691932700
	I0219 04:35:30.692713   11220 fix.go:220] Guest: 2023-02-19 04:35:30.6919327 +0000 GMT Remote: 2023-02-19 04:35:28.6709188 +0000 GMT m=+141.623337201 (delta=2.0210139s)
	I0219 04:35:30.692713   11220 fix.go:191] guest clock delta is within tolerance: 2.0210139s
	I0219 04:35:30.692713   11220 start.go:83] releasing machines lock for "offline-docker-928900", held for 1m13.7181652s
	I0219 04:35:30.692713   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:31.450202   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:31.450347   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:31.450347   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:32.501176   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:32.501353   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:32.505075   11220 out.go:177] * Found network options:
	I0219 04:35:32.507809   11220 out.go:177]   - HTTP_PROXY=172.16.1.1:1
	W0219 04:35:32.510280   11220 out.go:239] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (172.28.246.85).
	I0219 04:35:32.512846   11220 out.go:177] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I0219 04:35:32.515797   11220 out.go:177]   - http_proxy=172.16.1.1:1
	I0219 04:35:30.698130    1628 out.go:204] * Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0219 04:35:30.698381    1628 start.go:159] libmachine.API.Create for "NoKubernetes-928900" (driver="hyperv")
	I0219 04:35:30.698381    1628 client.go:168] LocalClient.Create starting
	I0219 04:35:30.699431    1628 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0219 04:35:30.699431    1628 main.go:141] libmachine: Decoding PEM data...
	I0219 04:35:30.699431    1628 main.go:141] libmachine: Parsing certificate...
	I0219 04:35:30.699971    1628 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0219 04:35:30.700126    1628 main.go:141] libmachine: Decoding PEM data...
	I0219 04:35:30.700175    1628 main.go:141] libmachine: Parsing certificate...
	I0219 04:35:30.700305    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0219 04:35:31.133048    1628 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0219 04:35:31.133048    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:31.133108    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0219 04:35:31.816609    1628 main.go:141] libmachine: [stdout =====>] : False
	
	I0219 04:35:31.816609    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:31.816609    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 04:35:32.340962    1628 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 04:35:32.341152    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:32.341235    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 04:35:32.522805   11220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0219 04:35:32.522805   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:32.530797   11220 ssh_runner.go:195] Run: cat /version.json
	I0219 04:35:32.530797   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:33.323156   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:33.323316   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:33.323156   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:33.323392   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:33.323392   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:33.323465   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:34.485126   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:34.485126   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:34.485126   11220 sshutil.go:53] new ssh client: &{IP:172.28.246.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\id_rsa Username:docker}
	I0219 04:35:34.513879   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:34.513974   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:34.513974   11220 sshutil.go:53] new ssh client: &{IP:172.28.246.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\id_rsa Username:docker}
	I0219 04:35:34.585822   11220 ssh_runner.go:235] Completed: cat /version.json: (2.055032s)
	I0219 04:35:34.595846   11220 ssh_runner.go:195] Run: systemctl --version
	I0219 04:35:34.984675   11220 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.4617577s)
	I0219 04:35:34.997250   11220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0219 04:35:35.005275   11220 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0219 04:35:35.014480   11220 ssh_runner.go:195] Run: which cri-dockerd
	I0219 04:35:35.029556   11220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0219 04:35:35.044832   11220 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0219 04:35:35.087287   11220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0219 04:35:35.115786   11220 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0219 04:35:35.115786   11220 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:35:35.124069   11220 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:35:35.156726   11220 docker.go:630] Got preloaded images: 
	I0219 04:35:35.157267   11220 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
	I0219 04:35:35.167983   11220 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 04:35:35.194993   11220 ssh_runner.go:195] Run: which lz4
	I0219 04:35:35.211356   11220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0219 04:35:35.217502   11220 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0219 04:35:35.217502   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416334111 bytes)
	I0219 04:35:33.958832    1628 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 04:35:33.959023    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:33.960505    1628 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso...
	I0219 04:35:34.364974    1628 main.go:141] libmachine: Creating SSH key...
	I0219 04:35:34.428453    1628 main.go:141] libmachine: Creating VM...
	I0219 04:35:34.428453    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 04:35:36.003094    1628 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 04:35:36.003094    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:36.003094    1628 main.go:141] libmachine: Using switch "Default Switch"
	I0219 04:35:36.003094    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 04:35:36.812510    1628 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 04:35:36.812510    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:36.812510    1628 main.go:141] libmachine: Creating VHD
	I0219 04:35:36.812569    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\fixed.vhd' -SizeBytes 10MB -Fixed
	I0219 04:35:37.692180   11220 docker.go:594] Took 2.490968 seconds to copy over tarball
	I0219 04:35:37.705175   11220 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0219 04:35:38.606498    1628 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\fixed.
	                          vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 80F9A0F7-385E-49A3-9B13-B1EF56610A8C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0219 04:35:38.606498    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:38.606498    1628 main.go:141] libmachine: Writing magic tar header
	I0219 04:35:38.606581    1628 main.go:141] libmachine: Writing SSH key tar header
	I0219 04:35:38.614204    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\disk.vhd' -VHDType Dynamic -DeleteSource
	I0219 04:35:40.366151    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:35:40.366380    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:40.366380    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\disk.vhd' -SizeBytes 20000MB
	I0219 04:35:41.739426    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:35:41.739426    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:41.739601    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM NoKubernetes-928900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900' -SwitchName 'Default Switch' -MemoryStartupBytes 6000MB
	I0219 04:35:54.111297    1628 main.go:141] libmachine: [stdout =====>] : 
	Name                State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                ----- ----------- ----------------- ------   ------             -------
	NoKubernetes-928900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0219 04:35:54.111488    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:54.111488    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName NoKubernetes-928900 -DynamicMemoryEnabled $false
	I0219 04:35:56.005276    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:35:56.005276    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:56.005276    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor NoKubernetes-928900 -Count 2
	I0219 04:35:57.237535    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:35:57.237535    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:57.237721    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName NoKubernetes-928900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\boot2docker.iso'
	I0219 04:35:58.217785   11220 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (20.5125984s)
	I0219 04:35:58.217785   11220 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0219 04:35:58.283473   11220 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 04:35:58.302844   11220 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0219 04:35:58.345036   11220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:35:58.520324   11220 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:36:01.970105   11220 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.4497246s)
	I0219 04:36:01.970105   11220 start.go:485] detecting cgroup driver to use...
	I0219 04:36:01.970105   11220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:36:02.021878   11220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0219 04:36:02.053631   11220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0219 04:36:02.072510   11220 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0219 04:36:02.084584   11220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0219 04:36:02.111086   11220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:36:02.138040   11220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0219 04:36:02.162879   11220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:36:02.189803   11220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0219 04:36:02.217142   11220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0219 04:36:02.250427   11220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0219 04:36:02.274633   11220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0219 04:35:58.953053    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:35:58.953053    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:58.953346    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName NoKubernetes-928900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\disk.vhd'
	I0219 04:36:01.306037    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:01.306037    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:01.306037    1628 main.go:141] libmachine: Starting VM...
	I0219 04:36:01.306219    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM NoKubernetes-928900
	I0219 04:36:02.306766   11220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:36:02.498195   11220 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0219 04:36:02.526827   11220 start.go:485] detecting cgroup driver to use...
	I0219 04:36:02.536683   11220 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0219 04:36:02.566411   11220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:36:02.602012   11220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0219 04:36:02.642865   11220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:36:02.671286   11220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:36:02.702194   11220 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0219 04:36:02.757697   11220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:36:02.786774   11220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:36:02.852480   11220 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0219 04:36:03.041749   11220 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0219 04:36:03.273126   11220 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0219 04:36:03.273126   11220 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0219 04:36:03.314943   11220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:36:03.493694   11220 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:36:05.274393   11220 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.7807046s)
	I0219 04:36:05.283409   11220 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:36:05.465362   11220 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0219 04:36:05.635613   11220 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:36:05.821763   11220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:36:06.011012   11220 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0219 04:36:06.043667   11220 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0219 04:36:06.054039   11220 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0219 04:36:06.062177   11220 start.go:553] Will wait 60s for crictl version
	I0219 04:36:06.073966   11220 ssh_runner.go:195] Run: which crictl
	I0219 04:36:06.089701   11220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0219 04:36:06.231041   11220 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0219 04:36:06.239052   11220 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:36:06.289016   11220 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:36:06.341177   11220 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0219 04:36:06.344145   11220 out.go:177]   - env HTTP_PROXY=172.16.1.1:1
	I0219 04:36:06.346141   11220 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0219 04:36:06.352058   11220 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0219 04:36:06.352449   11220 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0219 04:36:06.352449   11220 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0219 04:36:06.352449   11220 ip.go:207] Found interface: {Index:11 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7f:a7:14 Flags:up|broadcast|multicast|running}
	I0219 04:36:06.355806   11220 ip.go:210] interface addr: fe80::8ff9:73c7:b894:c84f/64
	I0219 04:36:06.355894   11220 ip.go:210] interface addr: 172.28.240.1/20
	I0219 04:36:06.365756   11220 ssh_runner.go:195] Run: grep 172.28.240.1	host.minikube.internal$ /etc/hosts
	I0219 04:36:06.371249   11220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:36:06.395978   11220 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:36:06.403969   11220 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:36:06.442236   11220 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:36:06.442236   11220 docker.go:560] Images already preloaded, skipping extraction
	I0219 04:36:06.449389   11220 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:36:06.487149   11220 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:36:06.487149   11220 cache_images.go:84] Images are preloaded, skipping loading
	I0219 04:36:06.495245   11220 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0219 04:36:06.554935   11220 cni.go:84] Creating CNI manager for ""
	I0219 04:36:06.554935   11220 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:36:06.554935   11220 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0219 04:36:06.554935   11220 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.246.85 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:offline-docker-928900 NodeName:offline-docker-928900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.246.85"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.246.85 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0219 04:36:06.554935   11220 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.246.85
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "offline-docker-928900"
	  kubeletExtraArgs:
	    node-ip: 172.28.246.85
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.246.85"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0219 04:36:06.555540   11220 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=offline-docker-928900 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.246.85
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:offline-docker-928900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0219 04:36:06.566785   11220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0219 04:36:06.583546   11220 binaries.go:44] Found k8s binaries, skipping transfer
	I0219 04:36:06.595663   11220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0219 04:36:06.611289   11220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (454 bytes)
	I0219 04:36:06.641279   11220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0219 04:36:06.671272   11220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0219 04:36:06.709458   11220 ssh_runner.go:195] Run: grep 172.28.246.85	control-plane.minikube.internal$ /etc/hosts
	I0219 04:36:06.715860   11220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.246.85	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:36:06.738245   11220 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900 for IP: 172.28.246.85
	I0219 04:36:06.738245   11220 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:06.738907   11220 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0219 04:36:06.739686   11220 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0219 04:36:06.740574   11220 certs.go:315] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\client.key
	I0219 04:36:06.740772   11220 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\client.crt with IP's: []
	I0219 04:36:06.930006   11220 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\client.crt ...
	I0219 04:36:06.930006   11220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\client.crt: {Name:mk9ca54252595f9ab11c6c82f374c57d36342abd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:06.931022   11220 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\client.key ...
	I0219 04:36:06.931022   11220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\client.key: {Name:mk0e4414a5871d3b645e354d2366f0664ccca23b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:06.932025   11220 certs.go:315] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.key.3e6ff387
	I0219 04:36:06.932025   11220 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.crt.3e6ff387 with IP's: [172.28.246.85 10.96.0.1 127.0.0.1 10.0.0.1]
	I0219 04:36:03.123286    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:03.123286    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:03.123286    1628 main.go:141] libmachine: Waiting for host to start...
	I0219 04:36:03.123286    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:03.910176    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:03.910176    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:03.910176    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:05.010810    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:05.010810    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:06.012691    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:06.801290    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:06.801472    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:06.801472    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:07.917224   11220 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.crt.3e6ff387 ...
	I0219 04:36:07.917224   11220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.crt.3e6ff387: {Name:mk93509684f4a9d638e1cf43deec82004cfa1638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:07.919346   11220 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.key.3e6ff387 ...
	I0219 04:36:07.919346   11220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.key.3e6ff387: {Name:mk2c24ed1b4c1a4bb01d3cec89b4631b39d8ef05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:07.920904   11220 certs.go:333] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.crt.3e6ff387 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.crt
	I0219 04:36:07.927475   11220 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.key.3e6ff387 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.key
	I0219 04:36:07.930819   11220 certs.go:315] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\proxy-client.key
	I0219 04:36:07.931814   11220 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\proxy-client.crt with IP's: []
	I0219 04:36:08.113069   11220 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\proxy-client.crt ...
	I0219 04:36:08.113069   11220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\proxy-client.crt: {Name:mk1eff50f027c2b1736311380e60653f8bfc71fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:08.114050   11220 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\proxy-client.key ...
	I0219 04:36:08.114050   11220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\proxy-client.key: {Name:mk589158bee47663d12b4833876cb313f0d0cd1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:08.123091   11220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem (1338 bytes)
	W0219 04:36:08.123091   11220 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148_empty.pem, impossibly tiny 0 bytes
	I0219 04:36:08.123091   11220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0219 04:36:08.123091   11220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0219 04:36:08.124058   11220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0219 04:36:08.124058   11220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0219 04:36:08.124058   11220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem (1708 bytes)
	I0219 04:36:08.126057   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0219 04:36:08.176135   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0219 04:36:08.217781   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0219 04:36:08.260107   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0219 04:36:08.300771   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0219 04:36:08.339199   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0219 04:36:08.381665   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0219 04:36:08.427541   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0219 04:36:08.472155   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /usr/share/ca-certificates/101482.pem (1708 bytes)
	I0219 04:36:08.511681   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0219 04:36:08.552763   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem --> /usr/share/ca-certificates/10148.pem (1338 bytes)
	I0219 04:36:08.597594   11220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0219 04:36:08.636207   11220 ssh_runner.go:195] Run: openssl version
	I0219 04:36:08.656811   11220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101482.pem && ln -fs /usr/share/ca-certificates/101482.pem /etc/ssl/certs/101482.pem"
	I0219 04:36:08.685799   11220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101482.pem
	I0219 04:36:08.692398   11220 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:36:08.705378   11220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101482.pem
	I0219 04:36:08.724400   11220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101482.pem /etc/ssl/certs/3ec20f2e.0"
	I0219 04:36:08.763513   11220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0219 04:36:08.791129   11220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:36:08.797529   11220 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:36:08.805496   11220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:36:08.824303   11220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0219 04:36:08.853165   11220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10148.pem && ln -fs /usr/share/ca-certificates/10148.pem /etc/ssl/certs/10148.pem"
	I0219 04:36:08.879761   11220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10148.pem
	I0219 04:36:08.886311   11220 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:36:08.896968   11220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10148.pem
	I0219 04:36:08.915134   11220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10148.pem /etc/ssl/certs/51391683.0"
	I0219 04:36:08.940900   11220 kubeadm.go:401] StartCluster: {Name:offline-docker-928900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:offline-docker-928900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.246.85 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:36:08.953766   11220 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0219 04:36:09.001977   11220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0219 04:36:09.031450   11220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0219 04:36:09.074743   11220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0219 04:36:09.096341   11220 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0219 04:36:09.096429   11220 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0219 04:36:09.506617   11220 kubeadm.go:322] W0219 04:36:09.484983    1499 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:36:10.493226   11220 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0219 04:36:07.850666    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:07.850666    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:08.854763    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:09.670468    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:09.670468    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:09.670468    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:10.967024    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:10.967024    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:11.981169    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:12.847905    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:12.847905    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:12.847905    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:14.205597    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:14.205597    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:15.217915    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:15.967523    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:15.967523    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:15.967523    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:17.064839    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:17.064839    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:18.064994    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:18.842019    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:18.842019    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:18.842019    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:19.933285    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:19.933285    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:20.934998    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:21.726156    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:21.726321    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:21.726321    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:22.802749    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:22.803031    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:23.818547    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:24.595861    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:24.595861    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:24.595861    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:25.671428    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:25.671428    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:26.676626    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:27.498068    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:27.498257    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:27.498341    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:31.465750   11220 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0219 04:36:31.465750   11220 kubeadm.go:322] [preflight] Running pre-flight checks
	I0219 04:36:31.466473   11220 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0219 04:36:31.466660   11220 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0219 04:36:31.466905   11220 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0219 04:36:31.467036   11220 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0219 04:36:31.469558   11220 out.go:204]   - Generating certificates and keys ...
	I0219 04:36:31.469638   11220 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0219 04:36:31.469638   11220 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0219 04:36:31.470163   11220 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0219 04:36:31.470408   11220 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0219 04:36:31.470734   11220 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0219 04:36:31.470734   11220 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0219 04:36:31.470734   11220 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0219 04:36:31.471331   11220 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost offline-docker-928900] and IPs [172.28.246.85 127.0.0.1 ::1]
	I0219 04:36:31.471331   11220 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0219 04:36:31.472247   11220 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost offline-docker-928900] and IPs [172.28.246.85 127.0.0.1 ::1]
	I0219 04:36:31.472594   11220 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0219 04:36:31.472916   11220 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0219 04:36:31.473199   11220 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0219 04:36:31.473470   11220 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0219 04:36:31.473523   11220 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0219 04:36:31.473825   11220 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0219 04:36:31.474089   11220 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0219 04:36:31.474372   11220 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0219 04:36:31.474794   11220 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0219 04:36:31.474869   11220 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0219 04:36:31.474869   11220 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0219 04:36:31.475400   11220 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0219 04:36:31.479149   11220 out.go:204]   - Booting up control plane ...
	I0219 04:36:31.479814   11220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0219 04:36:31.480073   11220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0219 04:36:31.480401   11220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0219 04:36:31.480401   11220 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0219 04:36:31.481159   11220 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0219 04:36:31.481159   11220 kubeadm.go:322] [apiclient] All control plane components are healthy after 15.007700 seconds
	I0219 04:36:31.481800   11220 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0219 04:36:31.482055   11220 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0219 04:36:31.482055   11220 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0219 04:36:31.482811   11220 kubeadm.go:322] [mark-control-plane] Marking the node offline-docker-928900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0219 04:36:31.482811   11220 kubeadm.go:322] [bootstrap-token] Using token: wvfirw.1xer41cq7lm85lnh
	I0219 04:36:31.489756   11220 out.go:204]   - Configuring RBAC rules ...
	I0219 04:36:31.490751   11220 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0219 04:36:31.490751   11220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0219 04:36:31.490751   11220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0219 04:36:31.491748   11220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0219 04:36:31.491748   11220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0219 04:36:31.492832   11220 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0219 04:36:31.492832   11220 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0219 04:36:31.492832   11220 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0219 04:36:31.493750   11220 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0219 04:36:31.493750   11220 kubeadm.go:322] 
	I0219 04:36:31.493750   11220 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0219 04:36:31.493750   11220 kubeadm.go:322] 
	I0219 04:36:31.493750   11220 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0219 04:36:31.493750   11220 kubeadm.go:322] 
	I0219 04:36:31.493750   11220 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0219 04:36:31.494751   11220 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0219 04:36:31.494751   11220 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0219 04:36:31.494751   11220 kubeadm.go:322] 
	I0219 04:36:31.494751   11220 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0219 04:36:31.494751   11220 kubeadm.go:322] 
	I0219 04:36:31.495768   11220 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0219 04:36:31.495768   11220 kubeadm.go:322] 
	I0219 04:36:31.495768   11220 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0219 04:36:31.495768   11220 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0219 04:36:31.495768   11220 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0219 04:36:31.495768   11220 kubeadm.go:322] 
	I0219 04:36:31.496785   11220 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0219 04:36:31.496785   11220 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0219 04:36:31.496785   11220 kubeadm.go:322] 
	I0219 04:36:31.496785   11220 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token wvfirw.1xer41cq7lm85lnh \
	I0219 04:36:31.497754   11220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 \
	I0219 04:36:31.497754   11220 kubeadm.go:322] 	--control-plane 
	I0219 04:36:31.497754   11220 kubeadm.go:322] 
	I0219 04:36:31.497754   11220 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0219 04:36:31.497754   11220 kubeadm.go:322] 
	I0219 04:36:31.497754   11220 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token wvfirw.1xer41cq7lm85lnh \
	I0219 04:36:31.498763   11220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 
	I0219 04:36:31.498763   11220 cni.go:84] Creating CNI manager for ""
	I0219 04:36:31.498763   11220 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:36:31.506855   11220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0219 04:36:31.522055   11220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0219 04:36:31.540268   11220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0219 04:36:31.572420   11220 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0219 04:36:31.585992   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=b522747fea7d12101d906a75c46b71d9d9f96e61 minikube.k8s.io/name=offline-docker-928900 minikube.k8s.io/updated_at=2023_02_19T04_36_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:31.590822   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:31.672376   11220 ops.go:34] apiserver oom_adj: -16
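The `oom_adj` probe logged just above reports `-16` for kube-apiserver. The same kind of procfs read can be sketched against any local process on Linux; here it targets the current shell and uses `oom_score_adj`, the non-deprecated successor of `oom_adj` (the choice of target process and file is illustrative, not taken from this run):

```shell
# Read the OOM score adjustment of the current shell from procfs,
# analogous to minikube's: cat /proc/$(pgrep kube-apiserver)/oom_adj
# oom_score_adj is the modern replacement for the deprecated oom_adj.
cat "/proc/$$/oom_score_adj"
```

A negative value, like the `-16` minikube sets for the apiserver, tells the kernel's OOM killer to prefer sacrificing other processes first.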
	I0219 04:36:28.606489    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:28.606489    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:29.609763    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:30.415509    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:30.415509    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:30.415509    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:31.600639    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:31.600639    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:31.600705    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:32.455225    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:32.455225    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:32.455225    1628 machine.go:88] provisioning docker machine ...
	I0219 04:36:32.455225    1628 buildroot.go:166] provisioning hostname "NoKubernetes-928900"
	I0219 04:36:32.457454    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:32.439818   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:33.131395   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:33.620714   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:34.134628   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:34.621604   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:35.135732   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:35.626092   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:36.129272   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:36.634957   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:37.122722   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:33.294759    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:33.294759    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:33.294759    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:34.391277    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:34.391277    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:34.394279    1628 main.go:141] libmachine: Using SSH client type: native
	I0219 04:36:34.403631    1628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.255.137 22 <nil> <nil>}
	I0219 04:36:34.403631    1628 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-928900 && echo "NoKubernetes-928900" | sudo tee /etc/hostname
	I0219 04:36:34.580535    1628 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-928900
	
	I0219 04:36:34.580535    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:35.374253    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:35.374253    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:35.374253    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:36.480831    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:36.480960    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:36.485981    1628 main.go:141] libmachine: Using SSH client type: native
	I0219 04:36:36.486627    1628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.255.137 22 <nil> <nil>}
	I0219 04:36:36.486627    1628 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-928900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-928900/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-928900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:36:36.648991    1628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
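The SSH command whose (empty) output is logged above is minikube's idempotent `/etc/hosts` update: add or rewrite the `127.0.1.1` entry only if the hostname is not already present. A minimal sketch of the same grep/sed logic, run against a throwaway local file instead of the VM's `/etc/hosts` (GNU sed assumed for `-i`; the sample file contents are illustrative, and POSIX `[[:space:]]` classes stand in for the `\s` used in the original):

```shell
# Stand-in for the VM's /etc/hosts; the hostname matches this run.
HOSTS=$(mktemp)
NAME=NoKubernetes-928900
printf '127.0.0.1 localhost\n127.0.1.1 oldhost\n' > "$HOSTS"

# Only touch the file if no entry for $NAME exists yet.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # Rewrite the existing 127.0.1.1 entry in place.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # No 127.0.1.1 entry: append one.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running the snippet twice leaves the file unchanged the second time, which is why minikube can re-run it safely on every provision.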
	I0219 04:36:36.648991    1628 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:36:36.648991    1628 buildroot.go:174] setting up certificates
	I0219 04:36:36.648991    1628 provision.go:83] configureAuth start
	I0219 04:36:36.648991    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:37.413706    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:37.413706    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:37.413881    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:37.620947   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:38.129032   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:38.637292   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:39.123831   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:39.627231   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:40.136230   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:40.622407   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:41.129420   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:41.625645   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:42.128795   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:38.560772    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:38.560772    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:38.560772    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:39.351933    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:39.351933    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:39.352303    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:40.455817    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:40.455817    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:40.455817    1628 provision.go:138] copyHostCerts
	I0219 04:36:40.455817    1628 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:36:40.455817    1628 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:36:40.456603    1628 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:36:40.459691    1628 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:36:40.459691    1628 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:36:40.460060    1628 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:36:40.460704    1628 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:36:40.460704    1628 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:36:40.460704    1628 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:36:40.462697    1628 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.NoKubernetes-928900 san=[172.28.255.137 172.28.255.137 localhost 127.0.0.1 minikube NoKubernetes-928900]
	I0219 04:36:40.782474    1628 provision.go:172] copyRemoteCerts
	I0219 04:36:40.792470    1628 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:36:40.792470    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:41.580590    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:41.580590    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:41.580804    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:42.623149   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:43.126579   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:43.634544   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:43.893360   11220 kubeadm.go:1073] duration metric: took 12.3208473s to wait for elevateKubeSystemPrivileges.
	I0219 04:36:43.893360   11220 kubeadm.go:403] StartCluster complete in 34.9525774s
	I0219 04:36:43.893360   11220 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:43.893360   11220 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:36:43.895358   11220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:43.896386   11220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0219 04:36:43.896386   11220 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0219 04:36:43.896386   11220 addons.go:65] Setting storage-provisioner=true in profile "offline-docker-928900"
	I0219 04:36:43.896386   11220 addons.go:65] Setting default-storageclass=true in profile "offline-docker-928900"
	I0219 04:36:43.896386   11220 addons.go:227] Setting addon storage-provisioner=true in "offline-docker-928900"
	I0219 04:36:43.897368   11220 config.go:182] Loaded profile config "offline-docker-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:36:43.897368   11220 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "offline-docker-928900"
	I0219 04:36:43.897368   11220 host.go:66] Checking if "offline-docker-928900" exists ...
	I0219 04:36:43.897368   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:36:43.899314   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:36:43.906164   11220 kapi.go:59] client config for offline-docker-928900: &rest.Config{Host:"https://172.28.246.85:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\offline-docker-928900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\offline-docker-928900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:36:44.397747   11220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
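The long pipeline logged above edits the CoreDNS Corefile before piping it back through `kubectl replace`: one sed expression inserts a `hosts` block mapping `host.minikube.internal` to the host gateway IP ahead of the `forward` plugin, the other inserts a `log` directive ahead of `errors`. The same two sed edits applied to a stand-in Corefile (the sample Corefile is illustrative; GNU sed is assumed for `\n` handling in one-line `i\` text, as on the minikube guest):

```shell
# Stand-in for the coredns ConfigMap's Corefile data.
COREFILE=$(mktemp)
cat > "$COREFILE" <<'EOF'
.:53 {
        errors
        health
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}
EOF

# Same edits as the logged command: inject a hosts{} block before
# "forward" and a "log" directive before "errors".
sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.240.1 host.minikube.internal\n           fallthrough\n        }' \
    -e '/^        errors *$/i \        log' \
    "$COREFILE" > "$COREFILE.new"
cat "$COREFILE.new"
```

The `fallthrough` directive lets queries that miss the static `hosts` entry continue to the `forward` plugin, so only `host.minikube.internal` is answered locally.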
	I0219 04:36:44.518124   11220 kapi.go:248] "coredns" deployment in "kube-system" namespace and "offline-docker-928900" context rescaled to 1 replicas
	I0219 04:36:44.518124   11220 start.go:223] Will wait 6m0s for node &{Name: IP:172.28.246.85 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:36:44.523483   11220 out.go:177] * Verifying Kubernetes components...
	I0219 04:36:44.536233   11220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:36:44.780622   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:44.780622   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:44.780622   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:44.780622   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:44.784650   11220 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0219 04:36:44.782634   11220 kapi.go:59] client config for offline-docker-928900: &rest.Config{Host:"https://172.28.246.85:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\offline-docker-928900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\offline-docker-928900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:36:44.787632   11220 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0219 04:36:44.787632   11220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0219 04:36:44.787632   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:36:44.793614   11220 addons.go:227] Setting addon default-storageclass=true in "offline-docker-928900"
	I0219 04:36:44.793614   11220 host.go:66] Checking if "offline-docker-928900" exists ...
	I0219 04:36:44.795617   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:36:45.590116   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:45.590223   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:45.590116   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:45.590223   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:45.590308   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:45.590450   11220 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0219 04:36:45.590450   11220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0219 04:36:45.590450   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:36:46.399579   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:46.399579   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:46.399579   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:46.525746   11220 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.1269427s)
	I0219 04:36:46.525746   11220 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.9895196s)
	I0219 04:36:46.525746   11220 start.go:921] {"host.minikube.internal": 172.28.240.1} host record injected into CoreDNS's ConfigMap
	I0219 04:36:46.528082   11220 kapi.go:59] client config for offline-docker-928900: &rest.Config{Host:"https://172.28.246.85:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\offline-docker-928900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\offline-docker-928900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:36:46.529064   11220 node_ready.go:35] waiting up to 6m0s for node "offline-docker-928900" to be "Ready" ...
	I0219 04:36:46.544140   11220 node_ready.go:49] node "offline-docker-928900" has status "Ready":"True"
	I0219 04:36:46.544269   11220 node_ready.go:38] duration metric: took 15.205ms waiting for node "offline-docker-928900" to be "Ready" ...
	I0219 04:36:46.544353   11220 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:36:46.565202   11220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-d9lnt" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:46.839991   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:36:46.839991   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:46.839991   11220 sshutil.go:53] new ssh client: &{IP:172.28.246.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\id_rsa Username:docker}
	I0219 04:36:47.009280   11220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0219 04:36:42.658229    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:42.658229    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:42.658763    1628 sshutil.go:53] new ssh client: &{IP:172.28.255.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\id_rsa Username:docker}
	I0219 04:36:42.769335    1628 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.9768723s)
	I0219 04:36:42.770315    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:36:42.813550    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I0219 04:36:42.858261    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0219 04:36:42.907878    1628 provision.go:86] duration metric: configureAuth took 6.2589075s
	I0219 04:36:42.907878    1628 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:36:42.908875    1628 config.go:182] Loaded profile config "NoKubernetes-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:36:42.908875    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:43.686664    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:43.686664    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:43.686759    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:44.907768    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:44.907768    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:44.913686    1628 main.go:141] libmachine: Using SSH client type: native
	I0219 04:36:44.914692    1628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.255.137 22 <nil> <nil>}
	I0219 04:36:44.914692    1628 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:36:45.073886    1628 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:36:45.073958    1628 buildroot.go:70] root file system type: tmpfs
	I0219 04:36:45.074098    1628 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:36:45.074169    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:45.858071    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:45.858281    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:45.858281    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:47.076345    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:47.076345    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:47.080342    1628 main.go:141] libmachine: Using SSH client type: native
	I0219 04:36:47.081379    1628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.255.137 22 <nil> <nil>}
	I0219 04:36:47.081379    1628 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 04:36:47.262674    1628 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0219 04:36:47.262674    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:47.591246   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:36:47.591246   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:47.591246   11220 sshutil.go:53] new ssh client: &{IP:172.28.246.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\id_rsa Username:docker}
	I0219 04:36:47.803357   11220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0219 04:36:48.102068   11220 pod_ready.go:92] pod "coredns-787d4945fb-d9lnt" in "kube-system" namespace has status "Ready":"True"
	I0219 04:36:48.102068   11220 pod_ready.go:81] duration metric: took 1.5368713s waiting for pod "coredns-787d4945fb-d9lnt" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.102068   11220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-ltwwh" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.111096   11220 pod_ready.go:92] pod "coredns-787d4945fb-ltwwh" in "kube-system" namespace has status "Ready":"True"
	I0219 04:36:48.111096   11220 pod_ready.go:81] duration metric: took 9.0281ms waiting for pod "coredns-787d4945fb-ltwwh" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.111096   11220 pod_ready.go:78] waiting up to 6m0s for pod "etcd-offline-docker-928900" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.130834   11220 pod_ready.go:92] pod "etcd-offline-docker-928900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:36:48.130834   11220 pod_ready.go:81] duration metric: took 19.7382ms waiting for pod "etcd-offline-docker-928900" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.130834   11220 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-offline-docker-928900" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.137739   11220 pod_ready.go:92] pod "kube-apiserver-offline-docker-928900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:36:48.137739   11220 pod_ready.go:81] duration metric: took 6.9052ms waiting for pod "kube-apiserver-offline-docker-928900" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.137739   11220 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-offline-docker-928900" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.145479   11220 pod_ready.go:92] pod "kube-controller-manager-offline-docker-928900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:36:48.145479   11220 pod_ready.go:81] duration metric: took 7.7398ms waiting for pod "kube-controller-manager-offline-docker-928900" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.145559   11220 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zqzc7" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.370309   11220 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0219 04:36:48.372825   11220 addons.go:492] enable addons completed in 4.4764542s: enabled=[storage-provisioner default-storageclass]
	I0219 04:36:48.535551   11220 pod_ready.go:92] pod "kube-proxy-zqzc7" in "kube-system" namespace has status "Ready":"True"
	I0219 04:36:48.535551   11220 pod_ready.go:81] duration metric: took 389.9933ms waiting for pod "kube-proxy-zqzc7" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.535551   11220 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-offline-docker-928900" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.953444   11220 pod_ready.go:92] pod "kube-scheduler-offline-docker-928900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:36:48.953444   11220 pod_ready.go:81] duration metric: took 417.8945ms waiting for pod "kube-scheduler-offline-docker-928900" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.953444   11220 pod_ready.go:38] duration metric: took 2.4090566s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:36:48.953444   11220 api_server.go:51] waiting for apiserver process to appear ...
	I0219 04:36:48.963338   11220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:36:48.991941   11220 api_server.go:71] duration metric: took 4.473832s to wait for apiserver process to appear ...
	I0219 04:36:48.992061   11220 api_server.go:87] waiting for apiserver healthz status ...
	I0219 04:36:48.992061   11220 api_server.go:252] Checking apiserver healthz at https://172.28.246.85:8443/healthz ...
	I0219 04:36:49.001975   11220 api_server.go:278] https://172.28.246.85:8443/healthz returned 200:
	ok
	I0219 04:36:49.004699   11220 api_server.go:140] control plane version: v1.26.1
	I0219 04:36:49.004779   11220 api_server.go:130] duration metric: took 12.7177ms to wait for apiserver health ...
	I0219 04:36:49.004779   11220 system_pods.go:43] waiting for kube-system pods to appear ...
	I0219 04:36:49.147993   11220 system_pods.go:59] 8 kube-system pods found
	I0219 04:36:49.147993   11220 system_pods.go:61] "coredns-787d4945fb-d9lnt" [fba5029c-6a1e-4867-96aa-38252b508dcd] Running
	I0219 04:36:49.147993   11220 system_pods.go:61] "coredns-787d4945fb-ltwwh" [bd9e7528-e4e0-455d-943d-9d30d2c4f86a] Running
	I0219 04:36:49.147993   11220 system_pods.go:61] "etcd-offline-docker-928900" [988e3e49-cef8-4af7-9d68-ebfaa37fcddd] Running
	I0219 04:36:49.147993   11220 system_pods.go:61] "kube-apiserver-offline-docker-928900" [8d372eea-8522-43a1-b53d-242e612d7574] Running
	I0219 04:36:49.147993   11220 system_pods.go:61] "kube-controller-manager-offline-docker-928900" [73b4425d-b1c2-4191-b12c-22a69cfbfe7c] Running
	I0219 04:36:49.147993   11220 system_pods.go:61] "kube-proxy-zqzc7" [67168779-5ccc-4fdc-be85-5e920523a686] Running
	I0219 04:36:49.147993   11220 system_pods.go:61] "kube-scheduler-offline-docker-928900" [c2354ae5-928e-448c-b4cd-6c850d4431c8] Running
	I0219 04:36:49.147993   11220 system_pods.go:61] "storage-provisioner" [2bc544d2-4233-4227-8e89-d1a6dc59d23d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0219 04:36:49.147993   11220 system_pods.go:74] duration metric: took 143.2151ms to wait for pod list to return data ...
	I0219 04:36:49.147993   11220 default_sa.go:34] waiting for default service account to be created ...
	I0219 04:36:49.339376   11220 default_sa.go:45] found service account: "default"
	I0219 04:36:49.339376   11220 default_sa.go:55] duration metric: took 191.3834ms for default service account to be created ...
	I0219 04:36:49.339376   11220 system_pods.go:116] waiting for k8s-apps to be running ...
	I0219 04:36:49.549170   11220 system_pods.go:86] 8 kube-system pods found
	I0219 04:36:49.549170   11220 system_pods.go:89] "coredns-787d4945fb-d9lnt" [fba5029c-6a1e-4867-96aa-38252b508dcd] Running
	I0219 04:36:49.549170   11220 system_pods.go:89] "coredns-787d4945fb-ltwwh" [bd9e7528-e4e0-455d-943d-9d30d2c4f86a] Running
	I0219 04:36:49.549170   11220 system_pods.go:89] "etcd-offline-docker-928900" [988e3e49-cef8-4af7-9d68-ebfaa37fcddd] Running
	I0219 04:36:49.549170   11220 system_pods.go:89] "kube-apiserver-offline-docker-928900" [8d372eea-8522-43a1-b53d-242e612d7574] Running
	I0219 04:36:49.549170   11220 system_pods.go:89] "kube-controller-manager-offline-docker-928900" [73b4425d-b1c2-4191-b12c-22a69cfbfe7c] Running
	I0219 04:36:49.549170   11220 system_pods.go:89] "kube-proxy-zqzc7" [67168779-5ccc-4fdc-be85-5e920523a686] Running
	I0219 04:36:49.549170   11220 system_pods.go:89] "kube-scheduler-offline-docker-928900" [c2354ae5-928e-448c-b4cd-6c850d4431c8] Running
	I0219 04:36:49.549170   11220 system_pods.go:89] "storage-provisioner" [2bc544d2-4233-4227-8e89-d1a6dc59d23d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0219 04:36:49.549170   11220 system_pods.go:126] duration metric: took 209.7943ms to wait for k8s-apps to be running ...
	I0219 04:36:49.549170   11220 system_svc.go:44] waiting for kubelet service to be running ....
	I0219 04:36:49.559153   11220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:36:49.587740   11220 system_svc.go:56] duration metric: took 38.5708ms WaitForService to wait for kubelet.
	I0219 04:36:49.587740   11220 kubeadm.go:578] duration metric: took 5.0696334s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0219 04:36:49.587740   11220 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:36:49.734758   11220 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:36:49.734852   11220 node_conditions.go:123] node cpu capacity is 2
	I0219 04:36:49.734852   11220 node_conditions.go:105] duration metric: took 147.1123ms to run NodePressure ...
	I0219 04:36:49.734852   11220 start.go:228] waiting for startup goroutines ...
	I0219 04:36:49.734852   11220 start.go:233] waiting for cluster config update ...
	I0219 04:36:49.734951   11220 start.go:242] writing updated cluster config ...
	I0219 04:36:49.744234   11220 ssh_runner.go:195] Run: rm -f paused
	I0219 04:36:49.949509   11220 start.go:555] kubectl: 1.18.2, cluster: 1.26.1 (minor skew: 8)
	I0219 04:36:49.992031   11220 out.go:177] 
	W0219 04:36:49.995998   11220 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.1.
	I0219 04:36:50.000021   11220 out.go:177]   - Want kubectl v1.26.1? Try 'minikube kubectl -- get pods -A'
	I0219 04:36:50.006022   11220 out.go:177] * Done! kubectl is now configured to use "offline-docker-928900" cluster and "default" namespace by default
	I0219 04:36:48.071182    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:48.071356    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:48.071388    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:49.194843    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:49.194843    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:49.199841    1628 main.go:141] libmachine: Using SSH client type: native
	I0219 04:36:49.200527    1628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.255.137 22 <nil> <nil>}
	I0219 04:36:49.200527    1628 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 04:36:50.486710    1628 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0219 04:36:50.487666    1628 machine.go:91] provisioned docker machine in 18.0325024s
	I0219 04:36:50.487666    1628 client.go:171] LocalClient.Create took 1m19.7895518s
	I0219 04:36:50.487666    1628 start.go:167] duration metric: libmachine.API.Create for "NoKubernetes-928900" took 1m19.7895518s
	I0219 04:36:50.487666    1628 start.go:300] post-start starting for "NoKubernetes-928900" (driver="hyperv")
	I0219 04:36:50.487666    1628 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 04:36:50.495660    1628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 04:36:50.495660    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:51.271120    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:51.271120    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:51.271304    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:52.439368    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:52.439368    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:52.440175    1628 sshutil.go:53] new ssh client: &{IP:172.28.255.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\id_rsa Username:docker}
	I0219 04:36:52.552168    1628 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (2.0564221s)
	I0219 04:36:52.561959    1628 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 04:36:52.569342    1628 info.go:137] Remote host: Buildroot 2021.02.12
	I0219 04:36:52.569342    1628 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 04:36:52.569747    1628 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 04:36:52.570760    1628 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 04:36:52.580680    1628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 04:36:52.598251    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 04:36:56.647707    8340 start.go:368] acquired machines lock for "kubernetes-upgrade-803700" in 3m30.139618s
	I0219 04:36:56.648025    8340 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-803700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-803700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:36:56.648386    8340 start.go:125] createHost starting for "" (driver="hyperv")
	I0219 04:36:52.646421    1628 start.go:303] post-start completed in 2.1587622s
	I0219 04:36:52.655267    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:53.486600    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:53.486600    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:53.486600    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:54.596026    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:54.596026    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:54.596026    1628 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\config.json ...
	I0219 04:36:54.598669    1628 start.go:128] duration metric: createHost completed in 1m23.9051162s
	I0219 04:36:54.598669    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:55.373522    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:55.373522    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:55.373522    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:56.499574    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:56.499618    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:56.504920    1628 main.go:141] libmachine: Using SSH client type: native
	I0219 04:36:56.505603    1628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.255.137 22 <nil> <nil>}
	I0219 04:36:56.505603    1628 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0219 04:36:56.647079    1628 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676781416.637518960
	
	I0219 04:36:56.647079    1628 fix.go:207] guest clock: 1676781416.637518960
	I0219 04:36:56.647079    1628 fix.go:220] Guest: 2023-02-19 04:36:56.63751896 +0000 GMT Remote: 2023-02-19 04:36:54.598669 +0000 GMT m=+227.173683401 (delta=2.03884996s)
	I0219 04:36:56.647079    1628 fix.go:191] guest clock delta is within tolerance: 2.03884996s
	I0219 04:36:56.647079    1628 start.go:83] releasing machines lock for "NoKubernetes-928900", held for 1m25.9538003s
	I0219 04:36:56.647618    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:57.464036    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:57.464036    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:57.464036    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:56.651190    8340 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0219 04:36:56.651885    8340 start.go:159] libmachine.API.Create for "kubernetes-upgrade-803700" (driver="hyperv")
	I0219 04:36:56.651885    8340 client.go:168] LocalClient.Create starting
	I0219 04:36:56.652541    8340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0219 04:36:56.652541    8340 main.go:141] libmachine: Decoding PEM data...
	I0219 04:36:56.652541    8340 main.go:141] libmachine: Parsing certificate...
	I0219 04:36:56.653161    8340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0219 04:36:56.653462    8340 main.go:141] libmachine: Decoding PEM data...
	I0219 04:36:56.653534    8340 main.go:141] libmachine: Parsing certificate...
	I0219 04:36:56.653765    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0219 04:36:57.099213    8340 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0219 04:36:57.099213    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:57.099213    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0219 04:36:57.924494    8340 main.go:141] libmachine: [stdout =====>] : False
	
	I0219 04:36:57.924494    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:57.924494    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 04:36:58.649696    8340 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 04:36:58.649750    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:58.649750    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 04:36:58.664982    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:58.664982    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:58.668677    1628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0219 04:36:58.668677    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:58.684844    1628 ssh_runner.go:195] Run: cat /version.json
	I0219 04:36:58.685045    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:59.487691    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:59.487691    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:59.487894    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:59.518357    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:59.518536    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:59.518536    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:00.648142    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:37:00.648142    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:00.648142    1628 sshutil.go:53] new ssh client: &{IP:172.28.255.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\id_rsa Username:docker}
	I0219 04:37:00.692089    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:37:00.692089    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:00.692089    1628 sshutil.go:53] new ssh client: &{IP:172.28.255.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\id_rsa Username:docker}
	I0219 04:37:00.804310    1628 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.1356402s)
	I0219 04:37:00.805186    1628 ssh_runner.go:235] Completed: cat /version.json: (2.1194733s)
	I0219 04:37:00.815215    1628 ssh_runner.go:195] Run: systemctl --version
	I0219 04:37:00.834234    1628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0219 04:37:00.842204    1628 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0219 04:37:00.853208    1628 ssh_runner.go:195] Run: which cri-dockerd
	I0219 04:37:00.870214    1628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0219 04:37:00.885206    1628 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0219 04:37:00.926305    1628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0219 04:37:00.952925    1628 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0219 04:37:00.952925    1628 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:37:00.961040    1628 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:37:00.993350    1628 docker.go:630] Got preloaded images: 
	I0219 04:37:00.993350    1628 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
	I0219 04:37:01.003872    1628 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 04:37:01.040226    1628 ssh_runner.go:195] Run: which lz4
	I0219 04:37:01.058591    1628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0219 04:37:01.065182    1628 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0219 04:37:01.065182    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416334111 bytes)
	I0219 04:37:00.345219    8340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 04:37:00.345296    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:00.348335    8340 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso...
	I0219 04:37:00.794119    8340 main.go:141] libmachine: Creating SSH key...
	I0219 04:37:01.187364    8340 main.go:141] libmachine: Creating VM...
	I0219 04:37:01.187364    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 04:37:02.923434    8340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 04:37:02.923434    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:02.923434    8340 main.go:141] libmachine: Using switch "Default Switch"
	I0219 04:37:02.923434    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 04:37:03.697060    8340 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 04:37:03.697271    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:03.697271    8340 main.go:141] libmachine: Creating VHD
	I0219 04:37:03.697271    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0219 04:37:03.468097    1628 docker.go:594] Took 2.421331 seconds to copy over tarball
	I0219 04:37:03.480708    1628 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0219 04:37:05.457567    8340 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\
	                          fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 975DD022-8CE0-4848-BAEA-C5005FE04769
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0219 04:37:05.457648    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:05.457648    8340 main.go:141] libmachine: Writing magic tar header
	I0219 04:37:05.457795    8340 main.go:141] libmachine: Writing SSH key tar header
	I0219 04:37:05.465621    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0219 04:37:07.221221    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:07.221309    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:07.221309    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\disk.vhd' -SizeBytes 20000MB
	I0219 04:37:08.586944    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:08.586944    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:08.586944    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM kubernetes-upgrade-803700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0219 04:37:09.050270    1628 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.5695807s)
	I0219 04:37:09.050270    1628 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0219 04:37:09.121999    1628 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 04:37:09.140548    1628 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0219 04:37:09.188599    1628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:37:09.367337    1628 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:37:17.064405    8340 main.go:141] libmachine: [stdout =====>] : 
	Name                      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                      ----- ----------- ----------------- ------   ------             -------
	kubernetes-upgrade-803700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0219 04:37:17.064655    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:17.064655    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName kubernetes-upgrade-803700 -DynamicMemoryEnabled $false
	I0219 04:37:19.543677    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:19.543677    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:19.543677    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor kubernetes-upgrade-803700 -Count 2
	I0219 04:37:20.880531    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:20.880739    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:20.880739    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName kubernetes-upgrade-803700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\boot2docker.iso'
	I0219 04:37:22.644910    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:22.644969    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:22.645108    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName kubernetes-upgrade-803700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\disk.vhd'
	I0219 04:37:24.659450    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:24.659450    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:24.659450    8340 main.go:141] libmachine: Starting VM...
	I0219 04:37:24.659450    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kubernetes-upgrade-803700
	I0219 04:37:24.628591    1628 ssh_runner.go:235] Completed: sudo systemctl restart docker: (15.2613054s)
	I0219 04:37:24.628591    1628 start.go:485] detecting cgroup driver to use...
	I0219 04:37:24.629213    1628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:37:24.674895    1628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0219 04:37:24.701527    1628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0219 04:37:24.720798    1628 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0219 04:37:24.731091    1628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0219 04:37:24.771408    1628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:37:24.797201    1628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0219 04:37:24.824448    1628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:37:24.851926    1628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0219 04:37:24.888791    1628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0219 04:37:24.923523    1628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0219 04:37:24.951751    1628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0219 04:37:24.978693    1628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:37:25.162617    1628 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0219 04:37:25.189664    1628 start.go:485] detecting cgroup driver to use...
	I0219 04:37:25.202622    1628 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0219 04:37:25.230484    1628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:37:25.261282    1628 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0219 04:37:25.555559    1628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:37:25.591953    1628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:37:25.625610    1628 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0219 04:37:26.149198    1628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:37:26.170790    1628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:37:26.212155    1628 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0219 04:37:26.404697    1628 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0219 04:37:26.558780    1628 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0219 04:37:26.558780    1628 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0219 04:37:26.599150    1628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:37:26.766950    1628 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:37:30.724642    1628 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.9576501s)
	I0219 04:37:30.735848    1628 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:37:30.922242    1628 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0219 04:37:31.113265    1628 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:37:31.299060    1628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:37:31.501888    1628 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0219 04:37:31.528489    1628 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0219 04:37:31.539479    1628 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0219 04:37:31.548961    1628 start.go:553] Will wait 60s for crictl version
	I0219 04:37:31.562440    1628 ssh_runner.go:195] Run: which crictl
	I0219 04:37:31.579548    1628 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0219 04:37:31.727273    1628 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0219 04:37:31.739311    1628 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:37:31.799700    1628 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:37:31.898372    1628 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0219 04:37:31.898612    1628 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0219 04:37:31.906721    1628 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0219 04:37:31.906721    1628 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0219 04:37:31.906721    1628 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0219 04:37:31.906721    1628 ip.go:207] Found interface: {Index:11 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7f:a7:14 Flags:up|broadcast|multicast|running}
	I0219 04:37:31.910181    1628 ip.go:210] interface addr: fe80::8ff9:73c7:b894:c84f/64
	I0219 04:37:31.910181    1628 ip.go:210] interface addr: 172.28.240.1/20
	I0219 04:37:31.919952    1628 ssh_runner.go:195] Run: grep 172.28.240.1	host.minikube.internal$ /etc/hosts
	I0219 04:37:31.931074    1628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:37:31.953465    1628 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:37:31.963136    1628 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:37:31.999666    1628 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:37:31.999666    1628 docker.go:560] Images already preloaded, skipping extraction
	I0219 04:37:32.009899    1628 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:37:32.056067    1628 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:37:32.056067    1628 cache_images.go:84] Images are preloaded, skipping loading
	I0219 04:37:32.067744    1628 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0219 04:37:32.115770    1628 cni.go:84] Creating CNI manager for ""
	I0219 04:37:32.115770    1628 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:37:32.115770    1628 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0219 04:37:32.115770    1628 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.255.137 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:NoKubernetes-928900 NodeName:NoKubernetes-928900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.255.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.255.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0219 04:37:32.116511    1628 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.255.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "NoKubernetes-928900"
	  kubeletExtraArgs:
	    node-ip: 172.28.255.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.255.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0219 04:37:32.116627    1628 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=NoKubernetes-928900 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.255.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:NoKubernetes-928900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0219 04:37:32.127400    1628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0219 04:37:32.145360    1628 binaries.go:44] Found k8s binaries, skipping transfer
	I0219 04:37:32.158544    1628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0219 04:37:32.174355    1628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (453 bytes)
	I0219 04:37:32.209372    1628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0219 04:37:32.248362    1628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0219 04:37:32.289839    1628 ssh_runner.go:195] Run: grep 172.28.255.137	control-plane.minikube.internal$ /etc/hosts
	I0219 04:37:32.297539    1628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.255.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:37:32.318400    1628 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900 for IP: 172.28.255.137
	I0219 04:37:32.318400    1628 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:32.319163    1628 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0219 04:37:32.319641    1628 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0219 04:37:32.320513    1628 certs.go:315] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\client.key
	I0219 04:37:32.320621    1628 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\client.crt with IP's: []
	I0219 04:37:32.464269    1628 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\client.crt ...
	I0219 04:37:32.464269    1628 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\client.crt: {Name:mk6cc113d2a062338f6e681513431cb781d6a7cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:32.465264    1628 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\client.key ...
	I0219 04:37:32.465264    1628 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\client.key: {Name:mkef6e93b430868dc5093548d33ca3e4d0a289fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:32.466261    1628 certs.go:315] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.key.a0cadba9
	I0219 04:37:32.466261    1628 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.crt.a0cadba9 with IP's: [172.28.255.137 10.96.0.1 127.0.0.1 10.0.0.1]
	I0219 04:37:32.629628    1628 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.crt.a0cadba9 ...
	I0219 04:37:32.629628    1628 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.crt.a0cadba9: {Name:mkf98e0a330613f0be8480d7b68a27359b8057b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:32.630594    1628 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.key.a0cadba9 ...
	I0219 04:37:32.630594    1628 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.key.a0cadba9: {Name:mkd0c6c99b6db8f496a233c0df63fb6a8948c44c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:32.631606    1628 certs.go:333] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.crt.a0cadba9 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.crt
	I0219 04:37:32.639615    1628 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.key.a0cadba9 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.key
	I0219 04:37:32.640585    1628 certs.go:315] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\proxy-client.key
	I0219 04:37:32.640585    1628 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\proxy-client.crt with IP's: []
	I0219 04:37:30.592066    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:30.592066    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:30.592138    8340 main.go:141] libmachine: Waiting for host to start...
	I0219 04:37:30.592138    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:31.382108    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:31.382108    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:31.382108    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:32.531399    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:32.531764    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:33.537663    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:34.342583    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:34.342633    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:34.344808    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:32.934797    1628 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\proxy-client.crt ...
	I0219 04:37:32.934797    1628 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\proxy-client.crt: {Name:mkb4c19d7e11497f37a890f0a667d7636568d7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:32.935748    1628 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\proxy-client.key ...
	I0219 04:37:32.935748    1628 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\proxy-client.key: {Name:mk4e87d34b0ecc25384ba56b30764edd79efb68b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:32.945823    1628 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem (1338 bytes)
	W0219 04:37:32.945823    1628 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148_empty.pem, impossibly tiny 0 bytes
	I0219 04:37:32.945823    1628 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0219 04:37:32.946751    1628 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0219 04:37:32.946751    1628 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0219 04:37:32.946751    1628 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0219 04:37:32.946751    1628 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem (1708 bytes)
	I0219 04:37:32.948771    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0219 04:37:32.994067    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0219 04:37:33.035284    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0219 04:37:33.076545    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0219 04:37:33.117811    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0219 04:37:33.156456    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0219 04:37:33.203567    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0219 04:37:33.243851    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0219 04:37:33.281275    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem --> /usr/share/ca-certificates/10148.pem (1338 bytes)
	I0219 04:37:33.320512    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /usr/share/ca-certificates/101482.pem (1708 bytes)
	I0219 04:37:33.361082    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0219 04:37:33.405874    1628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0219 04:37:33.448887    1628 ssh_runner.go:195] Run: openssl version
	I0219 04:37:33.468712    1628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101482.pem && ln -fs /usr/share/ca-certificates/101482.pem /etc/ssl/certs/101482.pem"
	I0219 04:37:33.500769    1628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101482.pem
	I0219 04:37:33.508218    1628 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:37:33.518401    1628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101482.pem
	I0219 04:37:33.535848    1628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101482.pem /etc/ssl/certs/3ec20f2e.0"
	I0219 04:37:33.574360    1628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0219 04:37:33.606715    1628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:37:33.613571    1628 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:37:33.625225    1628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:37:33.647837    1628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0219 04:37:33.680779    1628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10148.pem && ln -fs /usr/share/ca-certificates/10148.pem /etc/ssl/certs/10148.pem"
	I0219 04:37:33.709909    1628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10148.pem
	I0219 04:37:33.716302    1628 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:37:33.725462    1628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10148.pem
	I0219 04:37:33.744119    1628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10148.pem /etc/ssl/certs/51391683.0"
	I0219 04:37:33.765068    1628 kubeadm.go:401] StartCluster: {Name:NoKubernetes-928900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:NoKubernetes-928900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.255.137 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:37:33.773417    1628 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0219 04:37:33.818579    1628 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0219 04:37:33.846547    1628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0219 04:37:33.872896    1628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0219 04:37:33.888442    1628 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0219 04:37:33.888545    1628 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0219 04:37:33.974794    1628 kubeadm.go:322] W0219 04:37:33.960482    1497 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:37:34.213097    1628 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0219 04:37:35.455602    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:35.455602    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:36.469508    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:37.248338    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:37.248338    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:37.248338    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:38.387577    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:38.387630    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:39.389005    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:40.215420    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:40.215420    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:40.215420    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:41.338044    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:41.338150    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:42.339649    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:43.144904    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:43.144904    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:43.145155    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:44.256294    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:44.256448    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:45.260435    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:46.067751    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:46.067869    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:46.067919    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:47.178901    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:47.178901    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:48.191176    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:48.994563    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:48.994563    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:48.994820    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:50.099723    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:50.099779    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:51.102150    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:51.910343    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:51.910645    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:51.910722    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:53.078829    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:53.078906    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:54.079708    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:55.547371    1628 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0219 04:37:55.547371    1628 kubeadm.go:322] [preflight] Running pre-flight checks
	I0219 04:37:55.547926    1628 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0219 04:37:55.548081    1628 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0219 04:37:55.548392    1628 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0219 04:37:55.548569    1628 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0219 04:37:55.555823    1628 out.go:204]   - Generating certificates and keys ...
	I0219 04:37:55.556044    1628 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0219 04:37:55.556044    1628 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0219 04:37:55.556602    1628 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0219 04:37:55.556742    1628 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0219 04:37:55.556796    1628 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0219 04:37:55.556796    1628 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0219 04:37:55.556796    1628 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0219 04:37:55.557667    1628 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost nokubernetes-928900] and IPs [172.28.255.137 127.0.0.1 ::1]
	I0219 04:37:55.557667    1628 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0219 04:37:55.558319    1628 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost nokubernetes-928900] and IPs [172.28.255.137 127.0.0.1 ::1]
	I0219 04:37:55.558477    1628 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0219 04:37:55.558477    1628 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0219 04:37:55.559038    1628 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0219 04:37:55.559253    1628 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0219 04:37:55.559499    1628 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0219 04:37:55.559632    1628 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0219 04:37:55.559632    1628 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0219 04:37:55.559632    1628 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0219 04:37:55.560316    1628 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0219 04:37:55.560364    1628 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0219 04:37:55.560364    1628 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0219 04:37:55.560364    1628 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0219 04:37:55.563554    1628 out.go:204]   - Booting up control plane ...
	I0219 04:37:55.563554    1628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0219 04:37:55.563554    1628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0219 04:37:55.564376    1628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0219 04:37:55.564818    1628 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0219 04:37:55.565186    1628 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0219 04:37:55.565813    1628 kubeadm.go:322] [apiclient] All control plane components are healthy after 15.506139 seconds
	I0219 04:37:55.566093    1628 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0219 04:37:55.566449    1628 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0219 04:37:55.566717    1628 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0219 04:37:55.566824    1628 kubeadm.go:322] [mark-control-plane] Marking the node nokubernetes-928900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0219 04:37:55.566824    1628 kubeadm.go:322] [bootstrap-token] Using token: jt4lcw.y2grqfpwjrsmf3rf
	I0219 04:37:55.570819    1628 out.go:204]   - Configuring RBAC rules ...
	I0219 04:37:55.570819    1628 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0219 04:37:55.570819    1628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0219 04:37:55.570819    1628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0219 04:37:55.570819    1628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0219 04:37:55.572037    1628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0219 04:37:55.572074    1628 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0219 04:37:55.572074    1628 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0219 04:37:55.572074    1628 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0219 04:37:55.572074    1628 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0219 04:37:55.572074    1628 kubeadm.go:322] 
	I0219 04:37:55.572074    1628 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0219 04:37:55.572074    1628 kubeadm.go:322] 
	I0219 04:37:55.573083    1628 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0219 04:37:55.573083    1628 kubeadm.go:322] 
	I0219 04:37:55.573083    1628 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0219 04:37:55.573083    1628 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0219 04:37:55.573083    1628 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0219 04:37:55.573083    1628 kubeadm.go:322] 
	I0219 04:37:55.573083    1628 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0219 04:37:55.573083    1628 kubeadm.go:322] 
	I0219 04:37:55.573083    1628 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0219 04:37:55.573083    1628 kubeadm.go:322] 
	I0219 04:37:55.573083    1628 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0219 04:37:55.574057    1628 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0219 04:37:55.574057    1628 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0219 04:37:55.574057    1628 kubeadm.go:322] 
	I0219 04:37:55.574057    1628 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0219 04:37:55.574057    1628 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0219 04:37:55.574057    1628 kubeadm.go:322] 
	I0219 04:37:55.574057    1628 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jt4lcw.y2grqfpwjrsmf3rf \
	I0219 04:37:55.575063    1628 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 \
	I0219 04:37:55.575063    1628 kubeadm.go:322] 	--control-plane 
	I0219 04:37:55.575063    1628 kubeadm.go:322] 
	I0219 04:37:55.575063    1628 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0219 04:37:55.575063    1628 kubeadm.go:322] 
	I0219 04:37:55.575063    1628 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jt4lcw.y2grqfpwjrsmf3rf \
	I0219 04:37:55.575063    1628 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 
	I0219 04:37:55.576073    1628 cni.go:84] Creating CNI manager for ""
	I0219 04:37:55.576073    1628 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:37:55.581257    1628 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0219 04:37:55.592838    1628 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0219 04:37:55.610753    1628 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0219 04:37:55.680443    1628 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0219 04:37:55.693121    1628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:37:55.693121    1628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=b522747fea7d12101d906a75c46b71d9d9f96e61 minikube.k8s.io/name=NoKubernetes-928900 minikube.k8s.io/updated_at=2023_02_19T04_37_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:37:55.781166    1628 ops.go:34] apiserver oom_adj: -16
	I0219 04:37:56.172163    1628 kubeadm.go:1073] duration metric: took 491.668ms to wait for elevateKubeSystemPrivileges.
	I0219 04:37:56.281141    1628 kubeadm.go:403] StartCluster complete in 22.5161496s
	I0219 04:37:56.281195    1628 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:56.281385    1628 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:37:56.282727    1628 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:56.284060    1628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0219 04:37:56.284060    1628 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0219 04:37:56.284060    1628 addons.go:65] Setting storage-provisioner=true in profile "NoKubernetes-928900"
	I0219 04:37:56.284060    1628 addons.go:227] Setting addon storage-provisioner=true in "NoKubernetes-928900"
	I0219 04:37:56.284619    1628 addons.go:65] Setting default-storageclass=true in profile "NoKubernetes-928900"
	I0219 04:37:56.284692    1628 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "NoKubernetes-928900"
	I0219 04:37:56.284692    1628 host.go:66] Checking if "NoKubernetes-928900" exists ...
	I0219 04:37:56.284769    1628 config.go:182] Loaded profile config "NoKubernetes-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:37:56.285492    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:37:56.286268    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:37:56.525686    1628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0219 04:37:56.878661    1628 kapi.go:248] "coredns" deployment in "kube-system" namespace and "NoKubernetes-928900" context rescaled to 1 replicas
	I0219 04:37:56.878711    1628 start.go:223] Will wait 6m0s for node &{Name: IP:172.28.255.137 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:37:56.882974    1628 out.go:177] * Verifying Kubernetes components...
	I0219 04:37:56.900866    1628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:37:57.118198    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:57.118198    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:57.127493    1628 addons.go:227] Setting addon default-storageclass=true in "NoKubernetes-928900"
	I0219 04:37:57.127493    1628 host.go:66] Checking if "NoKubernetes-928900" exists ...
	I0219 04:37:57.131747    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:37:57.140786    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:57.140786    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:57.143847    1628 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0219 04:37:57.146838    1628 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0219 04:37:57.146838    1628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0219 04:37:57.146838    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:37:54.905229    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:54.905288    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:54.905288    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:56.071828    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:56.072099    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:57.086375    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:58.034558    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:58.034558    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:58.034717    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:59.475309    8340 main.go:141] libmachine: [stdout =====>] : 172.28.248.155
	
	I0219 04:37:59.475309    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:59.475309    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:58.159525    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:58.159525    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:58.159525    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:58.159525    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:58.159525    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:58.159525    1628 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0219 04:37:58.159525    1628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0219 04:37:58.159525    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:37:58.316353    1628 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.790673s)
	I0219 04:37:58.316353    1628 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.4154917s)
	I0219 04:37:58.316353    1628 start.go:921] {"host.minikube.internal": 172.28.240.1} host record injected into CoreDNS's ConfigMap
	I0219 04:37:58.319366    1628 api_server.go:51] waiting for apiserver process to appear ...
	I0219 04:37:58.334342    1628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:37:58.371426    1628 api_server.go:71] duration metric: took 1.4926014s to wait for apiserver process to appear ...
	I0219 04:37:58.371426    1628 api_server.go:87] waiting for apiserver healthz status ...
	I0219 04:37:58.371426    1628 api_server.go:252] Checking apiserver healthz at https://172.28.255.137:8443/healthz ...
	I0219 04:37:58.385424    1628 api_server.go:278] https://172.28.255.137:8443/healthz returned 200:
	ok
	I0219 04:37:58.387811    1628 api_server.go:140] control plane version: v1.26.1
	I0219 04:37:58.387811    1628 api_server.go:130] duration metric: took 16.3846ms to wait for apiserver health ...
	I0219 04:37:58.387811    1628 system_pods.go:43] waiting for kube-system pods to appear ...
	I0219 04:37:58.398646    1628 system_pods.go:59] 4 kube-system pods found
	I0219 04:37:58.398646    1628 system_pods.go:61] "etcd-nokubernetes-928900" [da445a25-54e8-49f9-a05c-fbec08f3c301] Running
	I0219 04:37:58.398646    1628 system_pods.go:61] "kube-apiserver-nokubernetes-928900" [ac2e3c35-608b-4b25-bf70-ea5a9ecad8ec] Pending
	I0219 04:37:58.398646    1628 system_pods.go:61] "kube-controller-manager-nokubernetes-928900" [c57893c8-3803-4d5f-8687-7b92d776c31a] Pending
	I0219 04:37:58.398646    1628 system_pods.go:61] "kube-scheduler-nokubernetes-928900" [9920b009-84bd-4d92-9cd5-b113c2d48396] Pending
	I0219 04:37:58.398646    1628 system_pods.go:74] duration metric: took 10.8353ms to wait for pod list to return data ...
	I0219 04:37:58.398646    1628 kubeadm.go:578] duration metric: took 1.5198213s to wait for : map[apiserver:true system_pods:true] ...
	I0219 04:37:58.398646    1628 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:37:58.404048    1628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:37:58.404048    1628 node_conditions.go:123] node cpu capacity is 2
	I0219 04:37:58.404136    1628 node_conditions.go:105] duration metric: took 5.4899ms to run NodePressure ...
	I0219 04:37:58.404136    1628 start.go:228] waiting for startup goroutines ...
	I0219 04:37:59.111470    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:59.111686    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:59.111686    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:59.599936    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:37:59.599936    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:59.599936    1628 sshutil.go:53] new ssh client: &{IP:172.28.255.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\id_rsa Username:docker}
	I0219 04:37:59.810595    1628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0219 04:38:00.609278    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:38:00.609278    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:00.609278    1628 sshutil.go:53] new ssh client: &{IP:172.28.255.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\id_rsa Username:docker}
	I0219 04:38:00.766251    1628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0219 04:38:01.247500    1628 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0219 04:38:01.250831    1628 addons.go:492] enable addons completed in 4.9667875s: enabled=[storage-provisioner default-storageclass]
	I0219 04:38:01.250831    1628 start.go:233] waiting for cluster config update ...
	I0219 04:38:01.250831    1628 start.go:242] writing updated cluster config ...
	I0219 04:38:01.274829    1628 ssh_runner.go:195] Run: rm -f paused
	I0219 04:38:01.523334    1628 start.go:555] kubectl: 1.18.2, cluster: 1.26.1 (minor skew: 8)
	I0219 04:38:01.525557    1628 out.go:177] 
	W0219 04:38:01.529133    1628 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.1.
	I0219 04:38:01.534844    1628 out.go:177]   - Want kubectl v1.26.1? Try 'minikube kubectl -- get pods -A'
	I0219 04:38:01.538275    1628 out.go:177] * Done! kubectl is now configured to use "NoKubernetes-928900" cluster and "default" namespace by default
	I0219 04:38:00.530599    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:00.530656    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:00.530716    8340 machine.go:88] provisioning docker machine ...
	I0219 04:38:00.530772    8340 buildroot.go:166] provisioning hostname "kubernetes-upgrade-803700"
	I0219 04:38:00.530994    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:38:01.476338    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:01.476338    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:01.476338    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:38:02.720759    8340 main.go:141] libmachine: [stdout =====>] : 172.28.248.155
	
	I0219 04:38:02.720759    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:02.726537    8340 main.go:141] libmachine: Using SSH client type: native
	I0219 04:38:02.727504    8340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.155 22 <nil> <nil>}
	I0219 04:38:02.727698    8340 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-803700 && echo "kubernetes-upgrade-803700" | sudo tee /etc/hostname
	I0219 04:38:02.918570    8340 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-803700
	
	I0219 04:38:02.918725    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:38:03.860483    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:03.860558    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:03.860636    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:38:05.203366    8340 main.go:141] libmachine: [stdout =====>] : 172.28.248.155
	
	I0219 04:38:05.203617    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:05.208662    8340 main.go:141] libmachine: Using SSH client type: native
	I0219 04:38:05.209394    8340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.155 22 <nil> <nil>}
	I0219 04:38:05.209394    8340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-803700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-803700/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-803700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:38:05.382435    8340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0219 04:38:05.382435    8340 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:38:05.382435    8340 buildroot.go:174] setting up certificates
	I0219 04:38:05.382435    8340 provision.go:83] configureAuth start
	I0219 04:38:05.382435    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:38:06.202331    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:06.202381    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:06.202480    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:38:07.370134    8340 main.go:141] libmachine: [stdout =====>] : 172.28.248.155
	
	I0219 04:38:07.370320    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:07.370387    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:38:08.252434    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:08.252434    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:08.252434    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:38:09.501367    8340 main.go:141] libmachine: [stdout =====>] : 172.28.248.155
	
	I0219 04:38:09.501367    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:09.501367    8340 provision.go:138] copyHostCerts
	I0219 04:38:09.501367    8340 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:38:09.501367    8340 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:38:09.502420    8340 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:38:09.504287    8340 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:38:09.504287    8340 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:38:09.504842    8340 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:38:09.506287    8340 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:38:09.506385    8340 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:38:09.506492    8340 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:38:09.508691    8340 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-803700 san=[172.28.248.155 172.28.248.155 localhost 127.0.0.1 minikube kubernetes-upgrade-803700]
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sun 2023-02-19 04:36:23 UTC, ends at Sun 2023-02-19 04:38:16 UTC. --
	Feb 19 04:37:45 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:37:45.684465536Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/db500d906c580115a4986a3716cad00f9f57413dac36dd37a9a4c643e6939da9 pid=2067 runtime=io.containerd.runc.v2
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.771576308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.771651009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.771664809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.771919911Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6d75146e32ea3c26a5262ce325a5ad4fb10a96331a9e80bc79ed561325344020 pid=2971 runtime=io.containerd.runc.v2
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.789927823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.790089924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.790118424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.790700428Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2593c2cd0913c8204efb90c659a5c84b35dc6067512c0c6a18ee2dd6433747c9 pid=2981 runtime=io.containerd.runc.v2
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.818854504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.819462208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.819609209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.821728322Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f22645641fe3914cc9f61da31c443213dba270e6e5e9989f4b0cb6f52b96a23e pid=3006 runtime=io.containerd.runc.v2
	Feb 19 04:38:12 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:12.394777943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:38:12 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:12.394921543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:38:12 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:12.394957344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:38:12 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:12.395193845Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1276cb3c8928d61f0079ea963da4db8c5c878610d1bfcfeab39768397f7310ef pid=3101 runtime=io.containerd.runc.v2
	Feb 19 04:38:13 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:13.160394888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:38:13 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:13.160572389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:38:13 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:13.160592389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:38:13 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:13.160981091Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6a8f65c7149c935d2c68cc1e114ccc237aa001c21c97eb6a986bd175cd926cbd pid=3252 runtime=io.containerd.runc.v2
	Feb 19 04:38:13 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:13.449174306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:38:13 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:13.449321507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:38:13 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:13.449344807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:38:13 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:13.450335313Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8405542e120b22922cbba6af67964f8ed8b360675547a1827789be92c5cbcf9a pid=3328 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	8405542e120b2       6e38f40d628db       4 seconds ago       Running             storage-provisioner       0                   2593c2cd0913c
	6a8f65c7149c9       5185b96f0becf       5 seconds ago       Running             coredns                   0                   6d75146e32ea3
	1276cb3c8928d       46a6bb3c77ce0       5 seconds ago       Running             kube-proxy                0                   f22645641fe39
	db500d906c580       655493523f607       32 seconds ago      Running             kube-scheduler            0                   273c0385ca3db
	7333f9f1213d0       fce326961ae2d       32 seconds ago      Running             etcd                      0                   839cd21b300ae
	ccab12d1308d0       deb04688c4a35       32 seconds ago      Running             kube-apiserver            0                   89f488b5352d4
	1ec61dba91d1e       e9c08e11b07f6       33 seconds ago      Running             kube-controller-manager   0                   e48d5d98d9926
	
	* 
	* ==> coredns [6a8f65c7149c] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = dc373b1a880fdd4ccb700cff30600cc4bf8c50378309c853254a8500867351a3e9142cc9578843a443961b28e6690d646b490f89e043822a41fbe79aabc9a951
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:33633 - 34572 "HINFO IN 7078759061569732506.7153018791202959406. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034606205s
	
	* 
	* ==> describe nodes <==
	* Name:               nokubernetes-928900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=nokubernetes-928900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b522747fea7d12101d906a75c46b71d9d9f96e61
	                    minikube.k8s.io/name=NoKubernetes-928900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_19T04_37_55_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Feb 2023 04:37:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  nokubernetes-928900
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Feb 2023 04:38:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Feb 2023 04:38:16 +0000   Sun, 19 Feb 2023 04:37:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Feb 2023 04:38:16 +0000   Sun, 19 Feb 2023 04:37:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Feb 2023 04:38:16 +0000   Sun, 19 Feb 2023 04:37:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Feb 2023 04:38:16 +0000   Sun, 19 Feb 2023 04:37:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.255.137
	  Hostname:    nokubernetes-928900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             5925712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             5925712Ki
	  pods:               110
	System Info:
	  Machine ID:                 0cc83cb192f045599c8425830a963be6
	  System UUID:                250dded1-93d2-1140-b45a-92b4cf99cb94
	  Boot ID:                    46c8b0ab-a693-48a0-ab80-719ecf84a1da
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-787d4945fb-jnhkk                       100m (5%)     0 (0%)      70Mi (1%)        170Mi (2%)     9s
	  kube-system                 etcd-nokubernetes-928900                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         21s
	  kube-system                 kube-apiserver-nokubernetes-928900             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-controller-manager-nokubernetes-928900    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-proxy-lhrch                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-scheduler-nokubernetes-928900             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  38s (x6 over 38s)  kubelet          Node nokubernetes-928900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x6 over 38s)  kubelet          Node nokubernetes-928900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x5 over 38s)  kubelet          Node nokubernetes-928900 status is now: NodeHasSufficientPID
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s                kubelet          Node nokubernetes-928900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s                kubelet          Node nokubernetes-928900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s                kubelet          Node nokubernetes-928900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21s                kubelet          Node nokubernetes-928900 status is now: NodeReady
	  Normal  RegisteredNode           9s                 node-controller  Node nokubernetes-928900 event: Registered Node nokubernetes-928900 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000078] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.411145] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.501864] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.274578] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +8.689277] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +17.465909] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.160541] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[Feb19 04:37] systemd-fstab-generator[918]: Ignoring "noauto" for root device
	[ +13.600114] kauditd_printk_skb: 14 callbacks suppressed
	[  +2.175386] systemd-fstab-generator[1080]: Ignoring "noauto" for root device
	[  +1.252246] systemd-fstab-generator[1118]: Ignoring "noauto" for root device
	[  +0.183233] systemd-fstab-generator[1129]: Ignoring "noauto" for root device
	[  +0.187655] systemd-fstab-generator[1142]: Ignoring "noauto" for root device
	[  +2.075209] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.065085] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[  +0.195140] systemd-fstab-generator[1300]: Ignoring "noauto" for root device
	[  +0.195548] systemd-fstab-generator[1311]: Ignoring "noauto" for root device
	[  +0.181230] systemd-fstab-generator[1322]: Ignoring "noauto" for root device
	[  +6.792402] systemd-fstab-generator[1570]: Ignoring "noauto" for root device
	[  +0.830935] kauditd_printk_skb: 29 callbacks suppressed
	[ +16.166236] systemd-fstab-generator[2542]: Ignoring "noauto" for root device
	[Feb19 04:38] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [7333f9f1213d] <==
	* {"level":"info","ts":"2023-02-19T04:37:56.791Z","caller":"traceutil/trace.go:171","msg":"trace[1540700630] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:269; }","duration":"153.004203ms","start":"2023-02-19T04:37:56.638Z","end":"2023-02-19T04:37:56.791Z","steps":["trace[1540700630] 'range keys from in-memory index tree'  (duration: 152.7262ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-19T04:37:59.763Z","caller":"traceutil/trace.go:171","msg":"trace[756971240] transaction","detail":"{read_only:false; response_revision:285; number_of_response:1; }","duration":"135.612348ms","start":"2023-02-19T04:37:59.628Z","end":"2023-02-19T04:37:59.763Z","steps":["trace[756971240] 'process raft request'  (duration: 135.494647ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-19T04:38:02.177Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"247.363952ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10356456599194061833 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/kube-apiserver-5morxluv7gis5uzvafejd57a6m\" mod_revision:49 > success:<request_put:<key:\"/registry/leases/kube-system/kube-apiserver-5morxluv7gis5uzvafejd57a6m\" value_size:589 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-apiserver-5morxluv7gis5uzvafejd57a6m\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-02-19T04:38:02.178Z","caller":"traceutil/trace.go:171","msg":"trace[1546745065] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"490.671407ms","start":"2023-02-19T04:38:01.687Z","end":"2023-02-19T04:38:02.178Z","steps":["trace[1546745065] 'process raft request'  (duration: 242.095245ms)","trace[1546745065] 'compare'  (duration: 247.168951ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:38:02.178Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:38:01.687Z","time spent":"490.764507ms","remote":"127.0.0.1:50620","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":667,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/kube-apiserver-5morxluv7gis5uzvafejd57a6m\" mod_revision:49 > success:<request_put:<key:\"/registry/leases/kube-system/kube-apiserver-5morxluv7gis5uzvafejd57a6m\" value_size:589 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-apiserver-5morxluv7gis5uzvafejd57a6m\" > >"}
	{"level":"info","ts":"2023-02-19T04:38:02.178Z","caller":"traceutil/trace.go:171","msg":"trace[1836251581] linearizableReadLoop","detail":"{readStateIndex:311; appliedIndex:310; }","duration":"375.634182ms","start":"2023-02-19T04:38:01.802Z","end":"2023-02-19T04:38:02.178Z","steps":["trace[1836251581] 'read index received'  (duration: 126.578817ms)","trace[1836251581] 'applied index is now lower than readState.Index'  (duration: 249.053565ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:38:02.178Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"375.915984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-nokubernetes-928900\" ","response":"range_response_count:1 size:5408"}
	{"level":"info","ts":"2023-02-19T04:38:02.178Z","caller":"traceutil/trace.go:171","msg":"trace[665181997] range","detail":"{range_begin:/registry/pods/kube-system/etcd-nokubernetes-928900; range_end:; response_count:1; response_revision:300; }","duration":"376.119586ms","start":"2023-02-19T04:38:01.802Z","end":"2023-02-19T04:38:02.178Z","steps":["trace[665181997] 'agreement among raft nodes before linearized reading'  (duration: 375.882784ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-19T04:38:02.179Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:38:01.802Z","time spent":"376.396489ms","remote":"127.0.0.1:50590","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":5430,"request content":"key:\"/registry/pods/kube-system/etcd-nokubernetes-928900\" "}
	{"level":"info","ts":"2023-02-19T04:38:02.479Z","caller":"traceutil/trace.go:171","msg":"trace[372720543] transaction","detail":"{read_only:false; response_revision:301; number_of_response:1; }","duration":"287.693653ms","start":"2023-02-19T04:38:02.191Z","end":"2023-02-19T04:38:02.479Z","steps":["trace[372720543] 'process raft request'  (duration: 245.462022ms)","trace[372720543] 'compare'  (duration: 42.06973ms)"],"step_count":2}
	{"level":"info","ts":"2023-02-19T04:38:06.150Z","caller":"traceutil/trace.go:171","msg":"trace[239007215] transaction","detail":"{read_only:false; response_revision:303; number_of_response:1; }","duration":"367.272336ms","start":"2023-02-19T04:38:05.783Z","end":"2023-02-19T04:38:06.150Z","steps":["trace[239007215] 'process raft request'  (duration: 367.097735ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-19T04:38:06.150Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:38:05.783Z","time spent":"367.380537ms","remote":"127.0.0.1:50590","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4296,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-nokubernetes-928900\" mod_revision:284 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-nokubernetes-928900\" value_size:4227 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-nokubernetes-928900\" > >"}
	{"level":"warn","ts":"2023-02-19T04:38:06.679Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"305.219658ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10356456599194061851 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/nokubernetes-928900\" mod_revision:251 > success:<request_put:<key:\"/registry/leases/kube-node-lease/nokubernetes-928900\" value_size:504 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/nokubernetes-928900\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-02-19T04:38:06.679Z","caller":"traceutil/trace.go:171","msg":"trace[1493860130] linearizableReadLoop","detail":"{readStateIndex:316; appliedIndex:315; }","duration":"520.197979ms","start":"2023-02-19T04:38:06.159Z","end":"2023-02-19T04:38:06.679Z","steps":["trace[1493860130] 'read index received'  (duration: 214.651418ms)","trace[1493860130] 'applied index is now lower than readState.Index'  (duration: 305.545061ms)"],"step_count":2}
	{"level":"info","ts":"2023-02-19T04:38:06.680Z","caller":"traceutil/trace.go:171","msg":"trace[782785429] transaction","detail":"{read_only:false; response_revision:304; number_of_response:1; }","duration":"597.928528ms","start":"2023-02-19T04:38:06.082Z","end":"2023-02-19T04:38:06.680Z","steps":["trace[782785429] 'process raft request'  (duration: 290.995457ms)","trace[782785429] 'compare'  (duration: 305.036757ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:38:06.680Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:38:06.082Z","time spent":"598.110029ms","remote":"127.0.0.1:50620","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":564,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/nokubernetes-928900\" mod_revision:251 > success:<request_put:<key:\"/registry/leases/kube-node-lease/nokubernetes-928900\" value_size:504 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/nokubernetes-928900\" > >"}
	{"level":"warn","ts":"2023-02-19T04:38:06.682Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"266.403184ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-02-19T04:38:06.684Z","caller":"traceutil/trace.go:171","msg":"trace[642132320] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:304; }","duration":"268.259797ms","start":"2023-02-19T04:38:06.416Z","end":"2023-02-19T04:38:06.684Z","steps":["trace[642132320] 'agreement among raft nodes before linearized reading'  (duration: 266.251883ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-19T04:38:06.684Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"525.077113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-nokubernetes-928900\" ","response":"range_response_count:1 size:4311"}
	{"level":"info","ts":"2023-02-19T04:38:06.685Z","caller":"traceutil/trace.go:171","msg":"trace[1706596489] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-nokubernetes-928900; range_end:; response_count:1; response_revision:304; }","duration":"526.666524ms","start":"2023-02-19T04:38:06.159Z","end":"2023-02-19T04:38:06.685Z","steps":["trace[1706596489] 'agreement among raft nodes before linearized reading'  (duration: 520.911584ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-19T04:38:06.686Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:38:06.159Z","time spent":"527.024527ms","remote":"127.0.0.1:50590","response type":"/etcdserverpb.KV/Range","request count":0,"request size":63,"response count":1,"response size":4333,"request content":"key:\"/registry/pods/kube-system/kube-scheduler-nokubernetes-928900\" "}
	{"level":"info","ts":"2023-02-19T04:38:07.023Z","caller":"traceutil/trace.go:171","msg":"trace[1381122908] transaction","detail":"{read_only:false; response_revision:305; number_of_response:1; }","duration":"317.34894ms","start":"2023-02-19T04:38:06.706Z","end":"2023-02-19T04:38:07.023Z","steps":["trace[1381122908] 'process raft request'  (duration: 220.351658ms)","trace[1381122908] 'compare'  (duration: 96.909482ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:38:07.023Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:38:06.706Z","time spent":"317.551041ms","remote":"127.0.0.1:50590","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4104,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-nokubernetes-928900\" mod_revision:303 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-nokubernetes-928900\" value_size:4035 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-nokubernetes-928900\" > >"}
	{"level":"info","ts":"2023-02-19T04:38:15.925Z","caller":"traceutil/trace.go:171","msg":"trace[1786140760] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"243.524782ms","start":"2023-02-19T04:38:15.681Z","end":"2023-02-19T04:38:15.925Z","steps":["trace[1786140760] 'process raft request'  (duration: 243.327881ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-19T04:38:16.926Z","caller":"traceutil/trace.go:171","msg":"trace[1853276706] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"179.894697ms","start":"2023-02-19T04:38:16.746Z","end":"2023-02-19T04:38:16.926Z","steps":["trace[1853276706] 'process raft request'  (duration: 179.716896ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  04:38:17 up 2 min,  0 users,  load average: 2.29, 0.70, 0.25
	Linux NoKubernetes-928900 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ccab12d1308d] <==
	* I0219 04:37:51.167316       1 controller.go:615] quota admission added evaluator for: namespaces
	I0219 04:37:51.362901       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0219 04:37:51.603166       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0219 04:37:52.016732       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0219 04:37:52.038384       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0219 04:37:52.038545       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0219 04:37:53.407267       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0219 04:37:53.489614       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0219 04:37:53.704828       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0219 04:37:53.721146       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [172.28.255.137]
	I0219 04:37:53.723035       1 controller.go:615] quota admission added evaluator for: endpoints
	I0219 04:37:53.745755       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0219 04:37:54.098836       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0219 04:37:55.426489       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0219 04:37:55.459845       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0219 04:37:55.476900       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0219 04:38:06.683345       1 trace.go:219] Trace[565429588]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:9effddd2-829a-4208-bd01-7c8a72431b6c,client:172.28.255.137,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/nokubernetes-928900,user-agent:kubelet/v1.26.1 (linux/amd64) kubernetes/8f94681,verb:PUT (19-Feb-2023 04:38:06.080) (total time: 602ms):
	Trace[565429588]: ["GuaranteedUpdate etcd3" audit-id:9effddd2-829a-4208-bd01-7c8a72431b6c,key:/leases/kube-node-lease/nokubernetes-928900,type:*coordination.Lease,resource:leases.coordination.k8s.io 602ms (04:38:06.080)
	Trace[565429588]:  ---"Txn call completed" 601ms (04:38:06.682)]
	Trace[565429588]: [602.934063ms] [602.934063ms] END
	I0219 04:38:06.689056       1 trace.go:219] Trace[1682182241]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d850065f-9249-4115-9ff4-727847696731,client:172.28.255.137,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-scheduler-nokubernetes-928900,user-agent:kubelet/v1.26.1 (linux/amd64) kubernetes/8f94681,verb:GET (19-Feb-2023 04:38:06.158) (total time: 530ms):
	Trace[1682182241]: ---"About to write a response" 530ms (04:38:06.688)
	Trace[1682182241]: [530.454151ms] [530.454151ms] END
	I0219 04:38:08.257914       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0219 04:38:08.276040       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [1ec61dba91d1] <==
	* I0219 04:38:08.117091       1 shared_informer.go:273] Waiting for caches to sync for garbage collector
	I0219 04:38:08.117528       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0219 04:38:08.117954       1 taint_manager.go:211] "Sending events to api server"
	I0219 04:38:08.132713       1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone: 
	W0219 04:38:08.133959       1 node_lifecycle_controller.go:1053] Missing timestamp for Node nokubernetes-928900. Assuming now as a timestamp.
	I0219 04:38:08.134350       1 node_lifecycle_controller.go:1254] Controller detected that zone  is now in state Normal.
	I0219 04:38:08.132997       1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
	I0219 04:38:08.136176       1 shared_informer.go:280] Caches are synced for persistent volume
	I0219 04:38:08.138077       1 event.go:294] "Event occurred" object="nokubernetes-928900" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node nokubernetes-928900 event: Registered Node nokubernetes-928900 in Controller"
	I0219 04:38:08.143846       1 shared_informer.go:280] Caches are synced for attach detach
	I0219 04:38:08.144621       1 shared_informer.go:280] Caches are synced for crt configmap
	I0219 04:38:08.147179       1 shared_informer.go:280] Caches are synced for HPA
	I0219 04:38:08.148621       1 shared_informer.go:280] Caches are synced for expand
	I0219 04:38:08.169933       1 range_allocator.go:372] Set node nokubernetes-928900 PodCIDR to [10.244.0.0/24]
	I0219 04:38:08.199059       1 shared_informer.go:280] Caches are synced for deployment
	I0219 04:38:08.203826       1 shared_informer.go:280] Caches are synced for ReplicationController
	I0219 04:38:08.242423       1 shared_informer.go:280] Caches are synced for resource quota
	I0219 04:38:08.250298       1 shared_informer.go:280] Caches are synced for disruption
	I0219 04:38:08.287764       1 shared_informer.go:280] Caches are synced for resource quota
	I0219 04:38:08.300479       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 1"
	I0219 04:38:08.301195       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lhrch"
	I0219 04:38:08.407652       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-jnhkk"
	I0219 04:38:08.618798       1 shared_informer.go:280] Caches are synced for garbage collector
	I0219 04:38:08.622027       1 shared_informer.go:280] Caches are synced for garbage collector
	I0219 04:38:08.622178       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [1276cb3c8928] <==
	* I0219 04:38:12.882857       1 node.go:163] Successfully retrieved node IP: 172.28.255.137
	I0219 04:38:12.883032       1 server_others.go:109] "Detected node IP" address="172.28.255.137"
	I0219 04:38:12.883102       1 server_others.go:535] "Using iptables proxy"
	I0219 04:38:12.995878       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0219 04:38:12.995973       1 server_others.go:176] "Using iptables Proxier"
	I0219 04:38:12.996289       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0219 04:38:12.996677       1 server.go:655] "Version info" version="v1.26.1"
	I0219 04:38:12.996691       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0219 04:38:12.997986       1 config.go:317] "Starting service config controller"
	I0219 04:38:12.998026       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0219 04:38:12.998059       1 config.go:226] "Starting endpoint slice config controller"
	I0219 04:38:12.998064       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0219 04:38:13.005963       1 config.go:444] "Starting node config controller"
	I0219 04:38:13.006139       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0219 04:38:13.099173       1 shared_informer.go:280] Caches are synced for service config
	I0219 04:38:13.099321       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0219 04:38:13.107382       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [db500d906c58] <==
	* W0219 04:37:52.087077       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0219 04:37:52.087108       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0219 04:37:52.169403       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0219 04:37:52.169455       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0219 04:37:52.188545       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0219 04:37:52.190402       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0219 04:37:52.233730       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0219 04:37:52.233841       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0219 04:37:52.250044       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0219 04:37:52.250093       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0219 04:37:52.252066       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0219 04:37:52.252178       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0219 04:37:52.328919       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0219 04:37:52.328984       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0219 04:37:52.337320       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0219 04:37:52.337365       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0219 04:37:52.546029       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0219 04:37:52.546064       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0219 04:37:52.596615       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0219 04:37:52.596680       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0219 04:37:52.640485       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0219 04:37:52.640570       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0219 04:37:52.648370       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0219 04:37:52.648580       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0219 04:37:55.889628       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sun 2023-02-19 04:36:23 UTC, ends at Sun 2023-02-19 04:38:17 UTC. --
	Feb 19 04:38:08 NoKubernetes-928900 kubelet[2567]: W0219 04:38:08.461609    2567 reflector.go:424] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:nokubernetes-928900" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'nokubernetes-928900' and this object
	Feb 19 04:38:08 NoKubernetes-928900 kubelet[2567]: E0219 04:38:08.461835    2567 reflector.go:140] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:nokubernetes-928900" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'nokubernetes-928900' and this object
	Feb 19 04:38:08 NoKubernetes-928900 kubelet[2567]: I0219 04:38:08.495772    2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/014f9e4c-8fef-491f-8fea-aa8c38cdaba4-config-volume\") pod \"coredns-787d4945fb-jnhkk\" (UID: \"014f9e4c-8fef-491f-8fea-aa8c38cdaba4\") " pod="kube-system/coredns-787d4945fb-jnhkk"
	Feb 19 04:38:08 NoKubernetes-928900 kubelet[2567]: I0219 04:38:08.495929    2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btzpb\" (UniqueName: \"kubernetes.io/projected/10d0b397-d7f4-4b0b-b5fb-ed44b29f6dbd-kube-api-access-btzpb\") pod \"storage-provisioner\" (UID: \"10d0b397-d7f4-4b0b-b5fb-ed44b29f6dbd\") " pod="kube-system/storage-provisioner"
	Feb 19 04:38:08 NoKubernetes-928900 kubelet[2567]: I0219 04:38:08.496021    2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/10d0b397-d7f4-4b0b-b5fb-ed44b29f6dbd-tmp\") pod \"storage-provisioner\" (UID: \"10d0b397-d7f4-4b0b-b5fb-ed44b29f6dbd\") " pod="kube-system/storage-provisioner"
	Feb 19 04:38:08 NoKubernetes-928900 kubelet[2567]: I0219 04:38:08.496116    2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrh99\" (UniqueName: \"kubernetes.io/projected/014f9e4c-8fef-491f-8fea-aa8c38cdaba4-kube-api-access-xrh99\") pod \"coredns-787d4945fb-jnhkk\" (UID: \"014f9e4c-8fef-491f-8fea-aa8c38cdaba4\") " pod="kube-system/coredns-787d4945fb-jnhkk"
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.497617    2567 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.497734    2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/26e578c4-0e56-491d-b765-a3d763882b2f-kube-proxy podName:26e578c4-0e56-491d-b765-a3d763882b2f nodeName:}" failed. No retries permitted until 2023-02-19 04:38:09.997709991 +0000 UTC m=+14.648335787 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/26e578c4-0e56-491d-b765-a3d763882b2f-kube-proxy") pod "kube-proxy-lhrch" (UID: "26e578c4-0e56-491d-b765-a3d763882b2f") : failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.544841    2567 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.544889    2567 projected.go:198] Error preparing data for projected volume kube-api-access-ffb52 for pod kube-system/kube-proxy-lhrch: failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.544974    2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/26e578c4-0e56-491d-b765-a3d763882b2f-kube-api-access-ffb52 podName:26e578c4-0e56-491d-b765-a3d763882b2f nodeName:}" failed. No retries permitted until 2023-02-19 04:38:10.044952901 +0000 UTC m=+14.695578597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ffb52" (UniqueName: "kubernetes.io/projected/26e578c4-0e56-491d-b765-a3d763882b2f-kube-api-access-ffb52") pod "kube-proxy-lhrch" (UID: "26e578c4-0e56-491d-b765-a3d763882b2f") : failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.597956    2567 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.598516    2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/014f9e4c-8fef-491f-8fea-aa8c38cdaba4-config-volume podName:014f9e4c-8fef-491f-8fea-aa8c38cdaba4 nodeName:}" failed. No retries permitted until 2023-02-19 04:38:10.098487052 +0000 UTC m=+14.749112748 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/014f9e4c-8fef-491f-8fea-aa8c38cdaba4-config-volume") pod "coredns-787d4945fb-jnhkk" (UID: "014f9e4c-8fef-491f-8fea-aa8c38cdaba4") : failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.609274    2567 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.610111    2567 projected.go:198] Error preparing data for projected volume kube-api-access-btzpb for pod kube-system/storage-provisioner: failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.610299    2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10d0b397-d7f4-4b0b-b5fb-ed44b29f6dbd-kube-api-access-btzpb podName:10d0b397-d7f4-4b0b-b5fb-ed44b29f6dbd nodeName:}" failed. No retries permitted until 2023-02-19 04:38:10.11028393 +0000 UTC m=+14.760909626 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-btzpb" (UniqueName: "kubernetes.io/projected/10d0b397-d7f4-4b0b-b5fb-ed44b29f6dbd-kube-api-access-btzpb") pod "storage-provisioner" (UID: "10d0b397-d7f4-4b0b-b5fb-ed44b29f6dbd") : failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.743749    2567 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.743820    2567 projected.go:198] Error preparing data for projected volume kube-api-access-xrh99 for pod kube-system/coredns-787d4945fb-jnhkk: failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.743891    2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/014f9e4c-8fef-491f-8fea-aa8c38cdaba4-kube-api-access-xrh99 podName:014f9e4c-8fef-491f-8fea-aa8c38cdaba4 nodeName:}" failed. No retries permitted until 2023-02-19 04:38:10.243871206 +0000 UTC m=+14.894496902 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xrh99" (UniqueName: "kubernetes.io/projected/014f9e4c-8fef-491f-8fea-aa8c38cdaba4-kube-api-access-xrh99") pod "coredns-787d4945fb-jnhkk" (UID: "014f9e4c-8fef-491f-8fea-aa8c38cdaba4") : failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:12 NoKubernetes-928900 kubelet[2567]: I0219 04:38:12.825882    2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d75146e32ea3c26a5262ce325a5ad4fb10a96331a9e80bc79ed561325344020"
	Feb 19 04:38:13 NoKubernetes-928900 kubelet[2567]: I0219 04:38:13.169970    2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2593c2cd0913c8204efb90c659a5c84b35dc6067512c0c6a18ee2dd6433747c9"
	Feb 19 04:38:14 NoKubernetes-928900 kubelet[2567]: I0219 04:38:14.267734    2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lhrch" podStartSLOduration=6.267671341 pod.CreationTimestamp="2023-02-19 04:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-19 04:38:14.233377542 +0000 UTC m=+18.884003238" watchObservedRunningTime="2023-02-19 04:38:14.267671341 +0000 UTC m=+18.918297137"
	Feb 19 04:38:14 NoKubernetes-928900 kubelet[2567]: I0219 04:38:14.272659    2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-jnhkk" podStartSLOduration=6.272001866 pod.CreationTimestamp="2023-02-19 04:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-19 04:38:14.271063861 +0000 UTC m=+18.921689557" watchObservedRunningTime="2023-02-19 04:38:14.272001866 +0000 UTC m=+18.922627662"
	Feb 19 04:38:16 NoKubernetes-928900 kubelet[2567]: I0219 04:38:16.725889    2567 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 19 04:38:16 NoKubernetes-928900 kubelet[2567]: I0219 04:38:16.726966    2567 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	
	* 
	* ==> storage-provisioner [8405542e120b] <==
	* I0219 04:38:13.602462       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0219 04:38:13.619698       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0219 04:38:13.619759       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0219 04:38:13.641123       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0219 04:38:13.642006       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba74618a-6aac-427d-a93e-d509b288709a", APIVersion:"v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' NoKubernetes-928900_60717077-f010-4865-9bf0-d40e89b4d863 became leader
	I0219 04:38:13.642462       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_NoKubernetes-928900_60717077-f010-4865-9bf0-d40e89b4d863!
	I0219 04:38:13.743830       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_NoKubernetes-928900_60717077-f010-4865-9bf0-d40e89b4d863!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p NoKubernetes-928900 -n NoKubernetes-928900
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p NoKubernetes-928900 -n NoKubernetes-928900: (5.4048732s)
helpers_test.go:261: (dbg) Run:  kubectl --context NoKubernetes-928900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestNoKubernetes/serial/StartWithK8s FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-928900 -n NoKubernetes-928900
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-928900 -n NoKubernetes-928900: (5.5738724s)
helpers_test.go:244: <<< TestNoKubernetes/serial/StartWithK8s FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestNoKubernetes/serial/StartWithK8s]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-928900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-928900 logs -n 25: (5.2802644s)
helpers_test.go:252: TestNoKubernetes/serial/StartWithK8s logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-843300 sudo cat                            | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo cat                            | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl status docker --all                        |                           |                   |         |                     |                     |
	|         | --full --no-pager                                    |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl cat docker                                 |                           |                   |         |                     |                     |
	|         | --no-pager                                           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo cat                            | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /etc/docker/daemon.json                              |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo docker                         | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | system info                                          |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl status cri-docker                          |                           |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl cat cri-docker                             |                           |                   |         |                     |                     |
	|         | --no-pager                                           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo cat                            | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo cat                            | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | cri-dockerd --version                                |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl status containerd                          |                           |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl cat containerd                             |                           |                   |         |                     |                     |
	|         | --no-pager                                           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo cat                            | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo cat                            | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /etc/containerd/config.toml                          |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | containerd config dump                               |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl status crio --all                          |                           |                   |         |                     |                     |
	|         | --full --no-pager                                    |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                                | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl cat crio --no-pager                        |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo find                           | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo crio                           | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | config                                               |                           |                   |         |                     |                     |
	| delete  | -p cilium-843300                                     | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT | 19 Feb 23 04:33 GMT |
	| start   | -p kubernetes-upgrade-803700                         | kubernetes-upgrade-803700 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | --memory=2200                                        |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                           |                   |         |                     |                     |
	| ssh     | force-systemd-flag-928900                            | force-systemd-flag-928900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:35 GMT | 19 Feb 23 04:35 GMT |
	|         | ssh docker info --format                             |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                                    |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-flag-928900                         | force-systemd-flag-928900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:35 GMT | 19 Feb 23 04:36 GMT |
	| delete  | -p offline-docker-928900                             | offline-docker-928900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:36 GMT | 19 Feb 23 04:37 GMT |
	|---------|------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/19 04:33:24
	Running on machine: minikube1
	Binary: Built with gc go1.20 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0219 04:33:24.633051    8340 out.go:296] Setting OutFile to fd 892 ...
	I0219 04:33:24.694756    8340 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:33:24.694756    8340 out.go:309] Setting ErrFile to fd 864...
	I0219 04:33:24.694756    8340 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:33:24.713738    8340 out.go:303] Setting JSON to false
	I0219 04:33:24.716777    8340 start.go:125] hostinfo: {"hostname":"minikube1","uptime":18194,"bootTime":1676763010,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2604 Build 19045.2604","kernelVersion":"10.0.19045.2604 Build 19045.2604","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0219 04:33:24.716972    8340 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0219 04:33:24.722641    8340 out.go:177] * [kubernetes-upgrade-803700] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	I0219 04:33:24.726230    8340 notify.go:220] Checking for updates...
	I0219 04:33:24.728564    8340 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:33:24.730426    8340 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0219 04:33:24.733828    8340 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0219 04:33:24.736486    8340 out.go:177]   - MINIKUBE_LOCATION=master
	I0219 04:33:24.738887    8340 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0219 04:33:24.742919    8340 config.go:182] Loaded profile config "NoKubernetes-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:33:24.743953    8340 config.go:182] Loaded profile config "force-systemd-flag-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:33:24.745105    8340 config.go:182] Loaded profile config "offline-docker-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:33:24.745251    8340 driver.go:365] Setting default libvirt URI to qemu:///system
	I0219 04:33:26.445346    8340 out.go:177] * Using the hyperv driver based on user configuration
	I0219 04:33:26.448560    8340 start.go:296] selected driver: hyperv
	I0219 04:33:26.448560    8340 start.go:857] validating driver "hyperv" against <nil>
	I0219 04:33:26.448691    8340 start.go:868] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0219 04:33:26.498468    8340 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0219 04:33:26.499478    8340 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0219 04:33:26.499478    8340 cni.go:84] Creating CNI manager for ""
	I0219 04:33:26.499478    8340 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0219 04:33:26.499478    8340 start_flags.go:319] config:
	{Name:kubernetes-upgrade-803700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-803700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:33:26.500266    8340 iso.go:125] acquiring lock: {Name:mk0a282de77c20a01e287b73437e6c43df35e4e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:33:26.503883    8340 out.go:177] * Starting control plane node kubernetes-upgrade-803700 in cluster kubernetes-upgrade-803700
	I0219 04:33:23.051933    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:23.051933    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:23.051933    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor force-systemd-flag-928900 -Count 2
	I0219 04:33:23.900669    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:23.900669    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:23.900956    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName force-systemd-flag-928900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-928900\boot2docker.iso'
	I0219 04:33:25.095266    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:25.095266    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:25.095435    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName force-systemd-flag-928900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-928900\disk.vhd'
	I0219 04:33:26.378805    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:26.378805    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:26.378805    5220 main.go:141] libmachine: Starting VM...
	I0219 04:33:26.378923    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM force-systemd-flag-928900
	I0219 04:33:26.507164    8340 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0219 04:33:26.507164    8340 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0219 04:33:26.507532    8340 cache.go:57] Caching tarball of preloaded images
	I0219 04:33:26.507717    8340 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0219 04:33:26.508012    8340 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0219 04:33:26.508178    8340 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubernetes-upgrade-803700\config.json ...
	I0219 04:33:26.508431    8340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubernetes-upgrade-803700\config.json: {Name:mk4ddd66e70d2fd67da04bdf61196627efe592a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:33:26.508699    8340 cache.go:193] Successfully downloaded all kic artifacts
	I0219 04:33:26.508699    8340 start.go:364] acquiring machines lock for kubernetes-upgrade-803700: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0219 04:33:28.104143    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:28.104324    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:28.104324    5220 main.go:141] libmachine: Waiting for host to start...
	I0219 04:33:28.104411    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:28.839826    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:28.839885    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:28.840139    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:29.905807    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:29.905807    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:30.907963    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:31.638672    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:31.639004    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:31.639071    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:32.664215    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:32.664265    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:33.667542    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:34.410306    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:34.410383    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:34.410383    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:35.409546    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:35.409546    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:36.422568    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:37.153085    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:37.153177    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:37.153248    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:38.196401    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:38.196621    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:39.199974    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:39.895640    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:39.895890    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:39.895976    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:40.895710    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:40.895740    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:41.897989    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:42.614860    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:42.614994    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:42.614994    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:43.614691    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:43.614691    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:44.629676    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:45.345881    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:45.345932    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:45.345932    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:46.365855    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:46.365941    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:47.380564    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:48.094384    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:48.094433    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:48.094433    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:49.073355    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:49.073555    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:50.074835    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:50.771127    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:50.771127    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:50.771534    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:51.748668    5220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:33:51.748668    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:52.749928    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:53.487679    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:53.487751    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:53.487751    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:54.549580    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:33:54.549580    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:54.549580    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:55.279562    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:55.279562    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:55.279562    5220 machine.go:88] provisioning docker machine ...
	I0219 04:33:55.279562    5220 buildroot.go:166] provisioning hostname "force-systemd-flag-928900"
	I0219 04:33:55.279562    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:55.976533    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:55.976533    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:55.976679    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:56.955529    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:33:56.955620    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:56.960284    5220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:33:56.968397    5220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.243.54 22 <nil> <nil>}
	I0219 04:33:56.968397    5220 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-928900 && echo "force-systemd-flag-928900" | sudo tee /etc/hostname
	I0219 04:33:57.127128    5220 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-928900
	
	I0219 04:33:57.127214    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:57.839697    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:57.839767    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:57.839767    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:33:58.905037    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:33:58.905037    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:58.911571    5220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:33:58.912582    5220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.243.54 22 <nil> <nil>}
	I0219 04:33:58.912665    5220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-928900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-928900/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-928900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:33:59.066420    5220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0219 04:33:59.066420    5220 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:33:59.066420    5220 buildroot.go:174] setting up certificates
	I0219 04:33:59.066420    5220 provision.go:83] configureAuth start
	I0219 04:33:59.066420    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:33:59.819101    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:33:59.819101    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:33:59.819101    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:00.828229    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:00.828229    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:00.828229    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:01.524885    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:01.524885    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:01.524957    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:02.551082    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:02.551116    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:02.551116    5220 provision.go:138] copyHostCerts
	I0219 04:34:02.551116    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0219 04:34:02.551116    5220 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:34:02.551645    5220 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:34:02.552091    5220 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:34:02.553159    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0219 04:34:02.553347    5220 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:34:02.553427    5220 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:34:02.553802    5220 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:34:02.554854    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0219 04:34:02.555103    5220 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:34:02.555189    5220 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:34:02.555238    5220 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:34:02.556933    5220 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-flag-928900 san=[172.28.243.54 172.28.243.54 localhost 127.0.0.1 minikube force-systemd-flag-928900]
	I0219 04:34:02.705050    5220 provision.go:172] copyRemoteCerts
	I0219 04:34:02.714030    5220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:34:02.714030    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:03.439844    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:03.439844    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:03.439844    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:04.426301    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:04.426565    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:04.426733    5220 sshutil.go:53] new ssh client: &{IP:172.28.243.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-928900\id_rsa Username:docker}
	I0219 04:34:04.535715    5220 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.8216578s)
	I0219 04:34:04.535715    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0219 04:34:04.536332    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:34:04.579113    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0219 04:34:04.579510    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0219 04:34:04.626802    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0219 04:34:04.627242    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0219 04:34:04.666400    5220 provision.go:86] duration metric: configureAuth took 5.5999509s
	I0219 04:34:04.666428    5220 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:34:04.666541    5220 config.go:182] Loaded profile config "force-systemd-flag-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:34:04.667170    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:05.373061    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:05.373061    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:05.373156    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:06.371169    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:06.371391    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:06.374891    5220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:34:06.376018    5220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.243.54 22 <nil> <nil>}
	I0219 04:34:06.376018    5220 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:34:06.520441    5220 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:34:06.520441    5220 buildroot.go:70] root file system type: tmpfs
	I0219 04:34:06.520441    5220 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:34:06.521009    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:07.258090    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:07.258220    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:07.258503    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:08.324853    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:08.324853    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:08.328446    5220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:34:08.330317    5220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.243.54 22 <nil> <nil>}
	I0219 04:34:08.330456    5220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 04:34:08.494858    5220 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0219 04:34:08.494934    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:09.207758    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:09.207914    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:09.208155    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:10.198870    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:10.198870    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:10.203255    5220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:34:10.204117    5220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.243.54 22 <nil> <nil>}
	I0219 04:34:10.204195    5220 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 04:34:11.304154    5220 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0219 04:34:11.304154    5220 machine.go:91] provisioned docker machine in 16.0246458s
	I0219 04:34:11.304154    5220 client.go:171] LocalClient.Create took 1m1.9476s
	I0219 04:34:11.304154    5220 start.go:167] duration metric: libmachine.API.Create for "force-systemd-flag-928900" took 1m1.9476s
	I0219 04:34:11.304154    5220 start.go:300] post-start starting for "force-systemd-flag-928900" (driver="hyperv")
	I0219 04:34:11.304154    5220 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 04:34:11.316745    5220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 04:34:11.316745    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:12.035732    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:12.035732    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:12.035732    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:16.974791   11220 start.go:368] acquired machines lock for "offline-docker-928900" in 1m7.6112213s
	I0219 04:34:16.974791   11220 start.go:93] Provisioning new machine with config: &{Name:offline-docker-928900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:offline-docker-928900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:34:16.974791   11220 start.go:125] createHost starting for "" (driver="hyperv")
	I0219 04:34:16.982626   11220 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0219 04:34:16.982626   11220 start.go:159] libmachine.API.Create for "offline-docker-928900" (driver="hyperv")
	I0219 04:34:16.982626   11220 client.go:168] LocalClient.Create starting
	I0219 04:34:16.983629   11220 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0219 04:34:16.983750   11220 main.go:141] libmachine: Decoding PEM data...
	I0219 04:34:16.983750   11220 main.go:141] libmachine: Parsing certificate...
	I0219 04:34:16.983750   11220 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0219 04:34:16.984369   11220 main.go:141] libmachine: Decoding PEM data...
	I0219 04:34:16.984369   11220 main.go:141] libmachine: Parsing certificate...
	I0219 04:34:16.984369   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0219 04:34:13.110873    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:13.110974    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:13.111270    5220 sshutil.go:53] new ssh client: &{IP:172.28.243.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-928900\id_rsa Username:docker}
	I0219 04:34:13.223006    5220 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.9062669s)
	I0219 04:34:13.232694    5220 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 04:34:13.239326    5220 info.go:137] Remote host: Buildroot 2021.02.12
	I0219 04:34:13.239418    5220 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 04:34:13.239793    5220 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 04:34:13.240515    5220 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 04:34:13.240515    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> /etc/ssl/certs/101482.pem
	I0219 04:34:13.250731    5220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 04:34:13.266340    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 04:34:13.305602    5220 start.go:303] post-start completed in 2.0014543s
	I0219 04:34:13.308409    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:14.031697    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:14.031961    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:14.032112    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:15.088151    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:15.088151    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:15.088342    5220 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\config.json ...
	I0219 04:34:15.091208    5220 start.go:128] duration metric: createHost completed in 1m5.7415639s
	I0219 04:34:15.091294    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:15.799393    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:15.799393    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:15.799393    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:16.825754    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:16.825754    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:16.829744    5220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:34:16.831010    5220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.243.54 22 <nil> <nil>}
	I0219 04:34:16.831010    5220 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0219 04:34:16.974379    5220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676781256.973246000
	
	I0219 04:34:16.974443    5220 fix.go:207] guest clock: 1676781256.973246000
	I0219 04:34:16.974443    5220 fix.go:220] Guest: 2023-02-19 04:34:16.973246 +0000 GMT Remote: 2023-02-19 04:34:15.0912082 +0000 GMT m=+68.042239001 (delta=1.8820378s)
	I0219 04:34:16.974544    5220 fix.go:191] guest clock delta is within tolerance: 1.8820378s
	I0219 04:34:16.974544    5220 start.go:83] releasing machines lock for "force-systemd-flag-928900", held for 1m7.6250105s
	I0219 04:34:16.974791    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:17.727401    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:17.727474    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:17.727546    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:18.805132    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:18.805300    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:18.808744    5220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0219 04:34:18.808795    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:18.817624    5220 ssh_runner.go:195] Run: cat /version.json
	I0219 04:34:18.817624    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:34:19.582675    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:19.583619    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:19.583619    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:19.590221    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:19.590221    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:19.590221    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:20.763846    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:20.763846    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:20.763846    5220 sshutil.go:53] new ssh client: &{IP:172.28.243.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-928900\id_rsa Username:docker}
	I0219 04:34:20.783576    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:34:20.783576    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:20.783576    5220 sshutil.go:53] new ssh client: &{IP:172.28.243.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-928900\id_rsa Username:docker}
	I0219 04:34:20.875334    5220 ssh_runner.go:235] Completed: cat /version.json: (2.0577174s)
	I0219 04:34:20.886793    5220 ssh_runner.go:195] Run: systemctl --version
	I0219 04:34:21.330614    5220 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.5218783s)
	I0219 04:34:21.340674    5220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0219 04:34:21.348893    5220 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0219 04:34:21.358782    5220 ssh_runner.go:195] Run: which cri-dockerd
	I0219 04:34:21.374760    5220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0219 04:34:21.392276    5220 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0219 04:34:21.432718    5220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0219 04:34:21.460822    5220 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0219 04:34:21.460991    5220 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:34:21.469471    5220 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:34:21.506714    5220 docker.go:630] Got preloaded images: 
	I0219 04:34:21.506714    5220 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
	I0219 04:34:21.517310    5220 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 04:34:21.545059    5220 ssh_runner.go:195] Run: which lz4
	I0219 04:34:21.551467    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0219 04:34:21.561569    5220 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0219 04:34:21.567710    5220 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0219 04:34:21.567851    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416334111 bytes)
	I0219 04:34:17.409753   11220 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0219 04:34:17.409814   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:17.409902   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0219 04:34:18.092537   11220 main.go:141] libmachine: [stdout =====>] : False
	
	I0219 04:34:18.092537   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:18.092627   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 04:34:18.615997   11220 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 04:34:18.616158   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:18.616225   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 04:34:20.144056   11220 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 04:34:20.144138   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:20.146069   11220 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso...
	I0219 04:34:20.545881   11220 main.go:141] libmachine: Creating SSH key...
	I0219 04:34:21.214518   11220 main.go:141] libmachine: Creating VM...
	I0219 04:34:21.214518   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 04:34:24.179644    5220 docker.go:594] Took 2.628136 seconds to copy over tarball
	I0219 04:34:24.191171    5220 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0219 04:34:22.792254   11220 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 04:34:22.792377   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:22.792377   11220 main.go:141] libmachine: Using switch "Default Switch"
	I0219 04:34:22.792377   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 04:34:23.559480   11220 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 04:34:23.559511   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:23.559511   11220 main.go:141] libmachine: Creating VHD
	I0219 04:34:23.559511   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\fixed.vhd' -SizeBytes 10MB -Fixed
	I0219 04:34:25.339904   11220 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9DD08E28-24B0-412E-B53A-50C06EC6A781
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0219 04:34:25.339964   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:25.339964   11220 main.go:141] libmachine: Writing magic tar header
	I0219 04:34:25.339964   11220 main.go:141] libmachine: Writing SSH key tar header
	I0219 04:34:25.351267   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\disk.vhd' -VHDType Dynamic -DeleteSource
	I0219 04:34:27.101726   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:27.102164   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:27.102164   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\disk.vhd' -SizeBytes 20000MB
	I0219 04:34:28.491356   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:28.491356   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:28.491356   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM offline-docker-928900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0219 04:34:34.301355   11220 main.go:141] libmachine: [stdout =====>] : 
	Name                  State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                  ----- ----------- ----------------- ------   ------             -------
	offline-docker-928900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0219 04:34:34.301355   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:34.301355   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName offline-docker-928900 -DynamicMemoryEnabled $false
	I0219 04:34:35.645677   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:35.645893   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:35.645893   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor offline-docker-928900 -Count 2
	I0219 04:34:36.422631   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:36.422631   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:36.422631   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName offline-docker-928900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\boot2docker.iso'
	I0219 04:34:34.985084    5220 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (10.7938572s)
	I0219 04:34:34.985149    5220 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0219 04:34:35.051490    5220 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 04:34:35.069551    5220 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0219 04:34:35.112083    5220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:34:35.282470    5220 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:34:37.665855    5220 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.383393s)
	I0219 04:34:37.665997    5220 start.go:485] detecting cgroup driver to use...
	I0219 04:34:37.666023    5220 start.go:489] using "systemd" cgroup driver as enforced via flags
	I0219 04:34:37.666023    5220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:34:37.699088    5220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0219 04:34:37.725267    5220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0219 04:34:37.741490    5220 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
	I0219 04:34:37.746201    5220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0219 04:34:37.775871    5220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:34:37.801921    5220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0219 04:34:37.830765    5220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:34:37.861662    5220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0219 04:34:37.889904    5220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0219 04:34:37.919452    5220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0219 04:34:37.948024    5220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0219 04:34:37.976090    5220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:34:38.170683    5220 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0219 04:34:38.199764    5220 start.go:485] detecting cgroup driver to use...
	I0219 04:34:38.199764    5220 start.go:489] using "systemd" cgroup driver as enforced via flags
	I0219 04:34:38.210588    5220 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0219 04:34:38.242189    5220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:34:38.279769    5220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0219 04:34:38.325812    5220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:34:38.355859    5220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:34:38.388478    5220 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0219 04:34:38.452982    5220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:34:38.475416    5220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:34:38.520373    5220 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0219 04:34:38.690755    5220 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0219 04:34:38.850838    5220 docker.go:529] configuring docker to use "systemd" as cgroup driver...
	I0219 04:34:38.850945    5220 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (143 bytes)
	I0219 04:34:38.897315    5220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:34:39.074481    5220 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:34:40.627257    5220 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5527819s)
	I0219 04:34:40.638087    5220 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:34:40.808939    5220 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0219 04:34:40.985950    5220 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:34:41.175930    5220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:34:41.348019    5220 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0219 04:34:41.372505    5220 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0219 04:34:41.384586    5220 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0219 04:34:41.393723    5220 start.go:553] Will wait 60s for crictl version
	I0219 04:34:41.403768    5220 ssh_runner.go:195] Run: which crictl
	I0219 04:34:41.428807    5220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0219 04:34:41.574548    5220 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0219 04:34:41.583792    5220 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:34:41.656445    5220 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:34:41.703693    5220 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0219 04:34:41.703781    5220 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0219 04:34:41.715806    5220 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0219 04:34:41.715806    5220 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0219 04:34:41.715806    5220 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0219 04:34:41.715806    5220 ip.go:207] Found interface: {Index:11 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7f:a7:14 Flags:up|broadcast|multicast|running}
	I0219 04:34:41.726305    5220 ip.go:210] interface addr: fe80::8ff9:73c7:b894:c84f/64
	I0219 04:34:41.726305    5220 ip.go:210] interface addr: 172.28.240.1/20
	I0219 04:34:41.737356    5220 ssh_runner.go:195] Run: grep 172.28.240.1	host.minikube.internal$ /etc/hosts
	I0219 04:34:41.743634    5220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:34:41.764011    5220 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:34:41.775947    5220 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:34:41.811137    5220 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:34:41.811231    5220 docker.go:560] Images already preloaded, skipping extraction
	I0219 04:34:41.820239    5220 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:34:41.850261    5220 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:34:41.850315    5220 cache_images.go:84] Images are preloaded, skipping loading
	I0219 04:34:41.859282    5220 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0219 04:34:41.902611    5220 cni.go:84] Creating CNI manager for ""
	I0219 04:34:41.902713    5220 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:34:41.902713    5220 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0219 04:34:41.902812    5220 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.243.54 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-928900 NodeName:force-systemd-flag-928900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.243.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.243.54 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0219 04:34:41.903069    5220 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.243.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "force-systemd-flag-928900"
	  kubeletExtraArgs:
	    node-ip: 172.28.243.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.243.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0219 04:34:41.903274    5220 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=force-systemd-flag-928900 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.243.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:force-systemd-flag-928900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0219 04:34:41.912277    5220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0219 04:34:41.925629    5220 binaries.go:44] Found k8s binaries, skipping transfer
	I0219 04:34:41.938451    5220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0219 04:34:41.953398    5220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (458 bytes)
	I0219 04:34:41.980572    5220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0219 04:34:42.010175    5220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0219 04:34:42.051484    5220 ssh_runner.go:195] Run: grep 172.28.243.54	control-plane.minikube.internal$ /etc/hosts
	I0219 04:34:42.056876    5220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.243.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:34:42.077846    5220 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900 for IP: 172.28.243.54
	I0219 04:34:42.077950    5220 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:34:42.078720    5220 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0219 04:34:42.079084    5220 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0219 04:34:42.079893    5220 certs.go:315] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\client.key
	I0219 04:34:42.080070    5220 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\client.crt with IP's: []
	I0219 04:34:42.150414    5220 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\client.crt ...
	I0219 04:34:42.150414    5220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\client.crt: {Name:mkb85c477f88e6f9cd46fb9c3bea22727d044627 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:34:42.152090    5220 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\client.key ...
	I0219 04:34:42.152090    5220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\client.key: {Name:mk850913db40a4f95d41bc69aa74a50088d16df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:34:42.152090    5220 certs.go:315] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.key.018aadec
	I0219 04:34:42.152090    5220 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.crt.018aadec with IP's: [172.28.243.54 10.96.0.1 127.0.0.1 10.0.0.1]
	I0219 04:34:37.512665   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:37.512665   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:37.512741   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName offline-docker-928900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\disk.vhd'
	I0219 04:34:38.789511   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:38.789660   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:38.789660   11220 main.go:141] libmachine: Starting VM...
	I0219 04:34:38.789710   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM offline-docker-928900
	I0219 04:34:40.428286   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:40.428469   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:40.428469   11220 main.go:141] libmachine: Waiting for host to start...
	I0219 04:34:40.428545   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:34:41.173008   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:41.173040   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:41.173100   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:42.224654   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:42.224725   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:42.338665    5220 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.crt.018aadec ...
	I0219 04:34:42.339665    5220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.crt.018aadec: {Name:mk61eea8985d9338a76e40825c85fc75f969855d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:34:42.340935    5220 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.key.018aadec ...
	I0219 04:34:42.340935    5220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.key.018aadec: {Name:mk86bee79e9bc4255c91c9ed2345fa7459e0068e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:34:42.341255    5220 certs.go:333] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.crt.018aadec -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.crt
	I0219 04:34:42.348537    5220 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.key.018aadec -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.key
	I0219 04:34:42.350401    5220 certs.go:315] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.key
	I0219 04:34:42.350509    5220 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.crt with IP's: []
	I0219 04:34:42.488905    5220 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.crt ...
	I0219 04:34:42.488905    5220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.crt: {Name:mkcc084ac6037cdb1825a07c409210c107ee7920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:34:42.489745    5220 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.key ...
	I0219 04:34:42.489745    5220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.key: {Name:mk1b6ef1d5e33d0f239dc57861567013b879b61a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:34:42.491557    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0219 04:34:42.491557    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0219 04:34:42.491557    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0219 04:34:42.498710    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0219 04:34:42.498971    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0219 04:34:42.499133    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0219 04:34:42.499282    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0219 04:34:42.499421    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0219 04:34:42.499614    5220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem (1338 bytes)
	W0219 04:34:42.500328    5220 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148_empty.pem, impossibly tiny 0 bytes
	I0219 04:34:42.500328    5220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0219 04:34:42.500578    5220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0219 04:34:42.500578    5220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0219 04:34:42.501159    5220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0219 04:34:42.501159    5220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem (1708 bytes)
	I0219 04:34:42.501159    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> /usr/share/ca-certificates/101482.pem
	I0219 04:34:42.501159    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:34:42.501159    5220 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem -> /usr/share/ca-certificates/10148.pem
	I0219 04:34:42.502400    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0219 04:34:42.540648    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0219 04:34:42.580076    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0219 04:34:42.621450    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-928900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0219 04:34:42.665256    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0219 04:34:42.704781    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0219 04:34:42.744960    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0219 04:34:42.796276    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0219 04:34:42.843697    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /usr/share/ca-certificates/101482.pem (1708 bytes)
	I0219 04:34:42.884663    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0219 04:34:42.933909    5220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem --> /usr/share/ca-certificates/10148.pem (1338 bytes)
	I0219 04:34:42.975689    5220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0219 04:34:43.019434    5220 ssh_runner.go:195] Run: openssl version
	I0219 04:34:43.045399    5220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101482.pem && ln -fs /usr/share/ca-certificates/101482.pem /etc/ssl/certs/101482.pem"
	I0219 04:34:43.080345    5220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101482.pem
	I0219 04:34:43.087025    5220 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:34:43.098551    5220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101482.pem
	I0219 04:34:43.120688    5220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101482.pem /etc/ssl/certs/3ec20f2e.0"
	I0219 04:34:43.152455    5220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0219 04:34:43.181454    5220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:34:43.188587    5220 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:34:43.196626    5220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:34:43.215894    5220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0219 04:34:43.254301    5220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10148.pem && ln -fs /usr/share/ca-certificates/10148.pem /etc/ssl/certs/10148.pem"
	I0219 04:34:43.284862    5220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10148.pem
	I0219 04:34:43.291638    5220 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:34:43.301915    5220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10148.pem
	I0219 04:34:43.319650    5220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10148.pem /etc/ssl/certs/51391683.0"
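(The `certs.go` sequence above installs each CA certificate under `/usr/share/ca-certificates`, computes its OpenSSL subject hash with `openssl x509 -hash -noout`, and then symlinks it into `/etc/ssl/certs` as `<hash>.0` so TLS libraries can find it by hash. A minimal sketch of that linking step, with the function name and a made-up hash value as illustrative stand-ins for the real `openssl` output:)

```python
import os
import tempfile

def install_ca_cert(cert_path: str, subject_hash: str, certs_dir: str) -> str:
    """Link cert_path into certs_dir as <subject_hash>.0, mirroring the
    `ln -fs <cert> /etc/ssl/certs/<hash>.0` step seen in the log."""
    link = os.path.join(certs_dir, subject_hash + ".0")
    # `-fs` semantics: drop any existing entry, then point the link at the cert.
    if os.path.islink(link) or os.path.exists(link):
        os.remove(link)
    os.symlink(cert_path, link)
    return link

# Demo in a throwaway directory with a placeholder hash; a real caller would
# obtain the hash from `openssl x509 -hash -noout -in <cert>` (e.g. b5213941
# for minikubeCA.pem above).
with tempfile.TemporaryDirectory() as d:
    cert = os.path.join(d, "minikubeCA.pem")
    with open(cert, "w") as f:
        f.write("dummy cert\n")
    link = install_ca_cert(cert, "b5213941", d)
    print(os.path.basename(link), os.readlink(link) == cert)
```

(The hash-named symlink, not the certificate filename, is what `openssl verify` and most TLS stacks actually look up.)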
	I0219 04:34:43.336652    5220 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-928900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:force-systemd-flag-928900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.243.54 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:34:43.345293    5220 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0219 04:34:43.386711    5220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0219 04:34:43.410197    5220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0219 04:34:43.434079    5220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0219 04:34:43.449076    5220 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0219 04:34:43.449165    5220 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0219 04:34:43.692893    5220 kubeadm.go:322] W0219 04:34:43.682257    1496 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:34:44.264265    5220 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0219 04:34:43.226025   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:34:43.965581   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:43.965581   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:43.965898   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:44.962563   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:44.962603   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:45.963293   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:34:46.659740   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:46.659945   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:46.659999   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:47.641677   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:47.641795   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:48.643900   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:34:49.373038   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:49.373365   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:49.373365   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:50.375445   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:50.375632   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:51.376536   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:34:52.069111   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:52.069386   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:52.069500   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:53.132219   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:53.132219   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:54.147392   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:34:54.903422   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:54.903422   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:54.903422   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:55.943601   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:55.943601   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:56.957757   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:34:57.701107   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:34:57.701107   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:57.701107   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:34:58.744699   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:34:58.744699   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:34:59.746232   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:00.462606   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:00.462606   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:00.462606   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:01.489784   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:35:01.489889   11220 main.go:141] libmachine: [stderr =====>] : 
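(The `libmachine` lines above poll `( Hyper-V\Get-VM ... ).state` and then `networkadapters[0].ipaddresses[0]` roughly once a second until the guest reports an address. The shape of that wait loop can be sketched as follows; `probe` and `wait_for_ip` are illustrative names, not minikube's actual API, and the real driver shells out to `powershell.exe` for each probe:)

```python
import time

def wait_for_ip(probe, timeout=360.0, interval=1.0):
    """Poll `probe` (returns the VM's IP string, or "" while the guest is
    still booting) until it yields an address or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        ip = probe()
        if ip:
            return ip
        time.sleep(interval)  # the log shows ~1s between Get-VM invocations
    raise TimeoutError("host did not report an IP address in time")

# Demo with a stub that "boots" after three empty polls.
attempts = iter(["", "", "", "172.28.243.54"])
print(wait_for_ip(lambda: next(attempts), timeout=5.0, interval=0.0))
```

(A VM in the `Running` state can still return an empty `ipaddresses[0]` until the guest's integration services come up, which is why the driver keeps polling after `Start-VM` succeeds.)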
	I0219 04:35:03.504636    5220 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0219 04:35:03.504636    5220 kubeadm.go:322] [preflight] Running pre-flight checks
	I0219 04:35:03.505615    5220 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0219 04:35:03.506079    5220 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0219 04:35:03.506436    5220 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0219 04:35:03.506436    5220 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0219 04:35:03.510211    5220 out.go:204]   - Generating certificates and keys ...
	I0219 04:35:03.510488    5220 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0219 04:35:03.510812    5220 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0219 04:35:03.511153    5220 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0219 04:35:03.511414    5220 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0219 04:35:03.511648    5220 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0219 04:35:03.511842    5220 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0219 04:35:03.511907    5220 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0219 04:35:03.512529    5220 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-928900 localhost] and IPs [172.28.243.54 127.0.0.1 ::1]
	I0219 04:35:03.512725    5220 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0219 04:35:03.513236    5220 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-928900 localhost] and IPs [172.28.243.54 127.0.0.1 ::1]
	I0219 04:35:03.513371    5220 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0219 04:35:03.513537    5220 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0219 04:35:03.513663    5220 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0219 04:35:03.513663    5220 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0219 04:35:03.513663    5220 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0219 04:35:03.514289    5220 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0219 04:35:03.514289    5220 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0219 04:35:03.514289    5220 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0219 04:35:03.515052    5220 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0219 04:35:03.515052    5220 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0219 04:35:03.515584    5220 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0219 04:35:03.515824    5220 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0219 04:35:03.518413    5220 out.go:204]   - Booting up control plane ...
	I0219 04:35:03.519029    5220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0219 04:35:03.519029    5220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0219 04:35:03.519493    5220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0219 04:35:03.519723    5220 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0219 04:35:03.520381    5220 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0219 04:35:03.520428    5220 kubeadm.go:322] [apiclient] All control plane components are healthy after 14.004312 seconds
	I0219 04:35:03.520428    5220 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0219 04:35:03.521474    5220 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0219 04:35:03.521720    5220 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0219 04:35:03.521897    5220 kubeadm.go:322] [mark-control-plane] Marking the node force-systemd-flag-928900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0219 04:35:03.521897    5220 kubeadm.go:322] [bootstrap-token] Using token: hrl8it.336vci6t8g26yai3
	I0219 04:35:03.525897    5220 out.go:204]   - Configuring RBAC rules ...
	I0219 04:35:03.526134    5220 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0219 04:35:03.526134    5220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0219 04:35:03.526134    5220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0219 04:35:03.526134    5220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0219 04:35:03.526134    5220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0219 04:35:03.526134    5220 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0219 04:35:03.526134    5220 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0219 04:35:03.526134    5220 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0219 04:35:03.526134    5220 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0219 04:35:03.526134    5220 kubeadm.go:322] 
	I0219 04:35:03.526134    5220 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0219 04:35:03.526134    5220 kubeadm.go:322] 
	I0219 04:35:03.526134    5220 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0219 04:35:03.526134    5220 kubeadm.go:322] 
	I0219 04:35:03.526134    5220 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0219 04:35:03.526134    5220 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0219 04:35:03.526134    5220 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0219 04:35:03.526134    5220 kubeadm.go:322] 
	I0219 04:35:03.526134    5220 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0219 04:35:03.526134    5220 kubeadm.go:322] 
	I0219 04:35:03.526134    5220 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0219 04:35:03.526134    5220 kubeadm.go:322] 
	I0219 04:35:03.526134    5220 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0219 04:35:03.526134    5220 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0219 04:35:03.526134    5220 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0219 04:35:03.526134    5220 kubeadm.go:322] 
	I0219 04:35:03.526134    5220 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0219 04:35:03.526134    5220 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0219 04:35:03.526134    5220 kubeadm.go:322] 
	I0219 04:35:03.526134    5220 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token hrl8it.336vci6t8g26yai3 \
	I0219 04:35:03.526134    5220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 \
	I0219 04:35:03.526134    5220 kubeadm.go:322] 	--control-plane 
	I0219 04:35:03.529066    5220 kubeadm.go:322] 
	I0219 04:35:03.529066    5220 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0219 04:35:03.529209    5220 kubeadm.go:322] 
	I0219 04:35:03.529339    5220 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token hrl8it.336vci6t8g26yai3 \
	I0219 04:35:03.529339    5220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 
	I0219 04:35:03.529339    5220 cni.go:84] Creating CNI manager for ""
	I0219 04:35:03.529339    5220 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:35:03.533439    5220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0219 04:35:03.553854    5220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0219 04:35:03.596864    5220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0219 04:35:03.656270    5220 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0219 04:35:03.668736    5220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:35:03.669994    5220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=b522747fea7d12101d906a75c46b71d9d9f96e61 minikube.k8s.io/name=force-systemd-flag-928900 minikube.k8s.io/updated_at=2023_02_19T04_35_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:35:03.696829    5220 ops.go:34] apiserver oom_adj: -16
	I0219 04:35:04.290947    5220 kubeadm.go:1073] duration metric: took 634.626ms to wait for elevateKubeSystemPrivileges.
	I0219 04:35:04.328761    5220 kubeadm.go:403] StartCluster complete in 20.9921778s
	I0219 04:35:04.328872    5220 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:35:04.329102    5220 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:35:04.330729    5220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:35:04.332478    5220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0219 04:35:04.332657    5220 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0219 04:35:04.333015    5220 addons.go:65] Setting storage-provisioner=true in profile "force-systemd-flag-928900"
	I0219 04:35:04.333078    5220 addons.go:65] Setting default-storageclass=true in profile "force-systemd-flag-928900"
	I0219 04:35:04.333201    5220 config.go:182] Loaded profile config "force-systemd-flag-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:35:04.333140    5220 addons.go:227] Setting addon storage-provisioner=true in "force-systemd-flag-928900"
	I0219 04:35:04.333297    5220 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-flag-928900"
	I0219 04:35:04.333297    5220 host.go:66] Checking if "force-systemd-flag-928900" exists ...
	I0219 04:35:04.334042    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:35:04.334977    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:35:04.341301    5220 kapi.go:59] client config for force-systemd-flag-928900: &rest.Config{Host:"https://172.28.243.54:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\force-systemd-flag-928900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\force-systemd-flag-928900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil)
, KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:35:04.342293    5220 cert_rotation.go:137] Starting client certificate rotation controller
	I0219 04:35:04.556338    5220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0219 04:35:04.929561    5220 kapi.go:248] "coredns" deployment in "kube-system" namespace and "force-systemd-flag-928900" context rescaled to 1 replicas
	I0219 04:35:04.929561    5220 start.go:223] Will wait 6m0s for node &{Name: IP:172.28.243.54 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:35:04.936767    5220 out.go:177] * Verifying Kubernetes components...
	I0219 04:35:04.948444    5220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:35:05.141332    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:05.141381    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:05.141485    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:05.141485    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:05.145405    5220 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0219 04:35:05.143702    5220 kapi.go:59] client config for force-systemd-flag-928900: &rest.Config{Host:"https://172.28.243.54:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\force-systemd-flag-928900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\force-systemd-flag-928900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil)
, KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:35:05.147647    5220 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0219 04:35:05.147647    5220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0219 04:35:05.147647    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:35:05.156052    5220 addons.go:227] Setting addon default-storageclass=true in "force-systemd-flag-928900"
	I0219 04:35:05.156052    5220 host.go:66] Checking if "force-systemd-flag-928900" exists ...
	I0219 04:35:05.157533    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:35:05.957384    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:05.957384    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:05.957384    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:05.957384    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:05.957384    5220 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0219 04:35:05.957384    5220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0219 04:35:05.957384    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:05.957384    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-928900 ).state
	I0219 04:35:06.047430    5220 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.4910961s)
	I0219 04:35:06.047430    5220 start.go:921] {"host.minikube.internal": 172.28.240.1} host record injected into CoreDNS's ConfigMap
	I0219 04:35:06.047430    5220 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.0989897s)
	I0219 04:35:06.049419    5220 kapi.go:59] client config for force-systemd-flag-928900: &rest.Config{Host:"https://172.28.243.54:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\force-systemd-flag-928900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\force-systemd-flag-928900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil)
, KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:35:06.050469    5220 api_server.go:51] waiting for apiserver process to appear ...
	I0219 04:35:06.065425    5220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:35:06.102423    5220 api_server.go:71] duration metric: took 1.1719386s to wait for apiserver process to appear ...
	I0219 04:35:06.102423    5220 api_server.go:87] waiting for apiserver healthz status ...
	I0219 04:35:06.102423    5220 api_server.go:252] Checking apiserver healthz at https://172.28.243.54:8443/healthz ...
	I0219 04:35:06.126773    5220 api_server.go:278] https://172.28.243.54:8443/healthz returned 200:
	ok
	I0219 04:35:06.129531    5220 api_server.go:140] control plane version: v1.26.1
	I0219 04:35:06.129628    5220 api_server.go:130] duration metric: took 27.205ms to wait for apiserver health ...
	I0219 04:35:06.129628    5220 system_pods.go:43] waiting for kube-system pods to appear ...
	I0219 04:35:06.138592    5220 system_pods.go:59] 4 kube-system pods found
	I0219 04:35:06.138592    5220 system_pods.go:61] "etcd-force-systemd-flag-928900" [2ff2300b-fa0a-42bf-b7ce-45d35c3953c3] Pending
	I0219 04:35:06.138592    5220 system_pods.go:61] "kube-apiserver-force-systemd-flag-928900" [20a7d0f0-3512-4fbb-8fdf-137c2eb9660f] Pending
	I0219 04:35:06.138592    5220 system_pods.go:61] "kube-controller-manager-force-systemd-flag-928900" [446ae01f-7e3a-45d4-996c-fdfa93864f49] Running
	I0219 04:35:06.138592    5220 system_pods.go:61] "kube-scheduler-force-systemd-flag-928900" [a8716c8a-b6bd-4e19-8a5e-103af1e47d69] Pending
	I0219 04:35:06.138592    5220 system_pods.go:74] duration metric: took 8.8446ms to wait for pod list to return data ...
	I0219 04:35:06.138592    5220 kubeadm.go:578] duration metric: took 1.2081075s to wait for : map[apiserver:true system_pods:true] ...
	I0219 04:35:06.138592    5220 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:35:06.142599    5220 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:35:06.142599    5220 node_conditions.go:123] node cpu capacity is 2
	I0219 04:35:06.142599    5220 node_conditions.go:105] duration metric: took 4.0075ms to run NodePressure ...
	I0219 04:35:06.142599    5220 start.go:228] waiting for startup goroutines ...
	I0219 04:35:06.797916    5220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:06.797916    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:06.797916    5220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:07.151655    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:35:07.151837    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:07.152060    5220 sshutil.go:53] new ssh client: &{IP:172.28.243.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-928900\id_rsa Username:docker}
	I0219 04:35:02.504114   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:03.235447   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:03.235494   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:03.235557   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:04.306185   11220 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:35:04.306224   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:05.312517   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:06.147589   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:06.147860   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:06.148003   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:07.306178    5220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0219 04:35:07.923703    5220 main.go:141] libmachine: [stdout =====>] : 172.28.243.54
	
	I0219 04:35:07.923756    5220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:07.923843    5220 sshutil.go:53] new ssh client: &{IP:172.28.243.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-928900\id_rsa Username:docker}
	I0219 04:35:08.060537    5220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0219 04:35:08.366548    5220 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0219 04:35:08.370099    5220 addons.go:492] enable addons completed in 4.037519s: enabled=[storage-provisioner default-storageclass]
	I0219 04:35:08.370140    5220 start.go:233] waiting for cluster config update ...
	I0219 04:35:08.370203    5220 start.go:242] writing updated cluster config ...
	I0219 04:35:08.382006    5220 ssh_runner.go:195] Run: rm -f paused
	I0219 04:35:08.571697    5220 start.go:555] kubectl: 1.18.2, cluster: 1.26.1 (minor skew: 8)
	I0219 04:35:08.574071    5220 out.go:177] 
	W0219 04:35:08.577048    5220 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.1.
	I0219 04:35:08.580398    5220 out.go:177]   - Want kubectl v1.26.1? Try 'minikube kubectl -- get pods -A'
	I0219 04:35:08.582869    5220 out.go:177] * Done! kubectl is now configured to use "force-systemd-flag-928900" cluster and "default" namespace by default
	I0219 04:35:07.325007   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:07.325100   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:07.325100   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:08.112980   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:08.112980   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:08.112980   11220 machine.go:88] provisioning docker machine ...
	I0219 04:35:08.112980   11220 buildroot.go:166] provisioning hostname "offline-docker-928900"
	I0219 04:35:08.112980   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:08.838290   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:08.838290   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:08.838290   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:09.910174   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:09.910294   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:09.917432   11220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:35:09.918161   11220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.85 22 <nil> <nil>}
	I0219 04:35:09.918161   11220 main.go:141] libmachine: About to run SSH command:
	sudo hostname offline-docker-928900 && echo "offline-docker-928900" | sudo tee /etc/hostname
	I0219 04:35:10.097469   11220 main.go:141] libmachine: SSH cmd err, output: <nil>: offline-docker-928900
	
	I0219 04:35:10.097677   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:10.844543   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:10.844543   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:10.844543   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:11.892794   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:11.893005   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:11.897354   11220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:35:11.898223   11220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.85 22 <nil> <nil>}
	I0219 04:35:11.898223   11220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\soffline-docker-928900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 offline-docker-928900/g' /etc/hosts;
				else 
					echo '127.0.1.1 offline-docker-928900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:35:12.054424   11220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0219 04:35:12.054424   11220 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:35:12.054424   11220 buildroot.go:174] setting up certificates
	I0219 04:35:12.054424   11220 provision.go:83] configureAuth start
	I0219 04:35:12.054424   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:12.816554   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:12.816554   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:12.816554   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:13.878725   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:13.879128   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:13.879128   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:14.644427   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:14.644740   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:14.644740   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:15.727749   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:15.727995   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:15.728066   11220 provision.go:138] copyHostCerts
	I0219 04:35:15.728066   11220 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:35:15.728066   11220 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:35:15.728777   11220 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:35:15.730146   11220 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:35:15.730146   11220 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:35:15.730459   11220 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:35:15.731677   11220 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:35:15.731677   11220 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:35:15.732178   11220 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:35:15.733165   11220 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.offline-docker-928900 san=[172.28.246.85 172.28.246.85 localhost 127.0.0.1 minikube offline-docker-928900]
	I0219 04:35:16.074222   11220 provision.go:172] copyRemoteCerts
	I0219 04:35:16.084723   11220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:35:16.085727   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:16.822104   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:16.822104   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:16.822104   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:17.899971   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:17.899971   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:17.899971   11220 sshutil.go:53] new ssh client: &{IP:172.28.246.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\id_rsa Username:docker}
	I0219 04:35:18.010136   11220 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.925419s)
	I0219 04:35:18.010136   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:35:18.052943   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0219 04:35:18.095782   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0219 04:35:18.135601   11220 provision.go:86] duration metric: configureAuth took 6.081197s
	I0219 04:35:18.135601   11220 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:35:18.136351   11220 config.go:182] Loaded profile config "offline-docker-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:35:18.136351   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:18.878431   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:18.878661   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:18.878661   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:19.916555   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:19.916623   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:19.923731   11220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:35:19.924488   11220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.85 22 <nil> <nil>}
	I0219 04:35:19.924488   11220 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:35:20.082546   11220 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:35:20.082601   11220 buildroot.go:70] root file system type: tmpfs
	I0219 04:35:20.082601   11220 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:35:20.082601   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:20.819148   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:20.819148   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:20.819281   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:21.830798   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:21.830798   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:21.834751   11220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:35:21.836451   11220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.85 22 <nil> <nil>}
	I0219 04:35:21.836451   11220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="HTTP_PROXY=172.16.1.1:1"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 04:35:22.016487   11220 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=HTTP_PROXY=172.16.1.1:1
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0219 04:35:22.016585   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:22.733731   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:22.733731   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:22.733731   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:23.798958   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:23.798958   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:23.802678   11220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:35:23.803544   11220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.85 22 <nil> <nil>}
	I0219 04:35:23.803544   11220 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 04:35:24.889781   11220 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
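The `diff ... || { mv ...; restart; }` SSH command above is an install-if-changed idiom: the freshly rendered unit is compared to the installed one, and only on a difference (or, as in this log, when the installed copy does not yet exist and `diff` fails to stat it) is the new file moved into place and the service restarted. A minimal re-creation of that idiom, using `/tmp` paths in place of `/lib/systemd/system`:

```shell
new=/tmp/docker.service.new
cur=/tmp/docker.service
rm -f "$cur"   # simulate the "no unit installed yet" case from the log
# The empty ExecStart= clears any value inherited from a base unit file, so
# the second ExecStart= is the only one systemd would see (see the comments
# in the unit file logged above).
printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd\n' > "$new"
diff -u "$cur" "$new" >/dev/null 2>&1 || {
  mv "$new" "$cur"
  echo "unit installed"
}
```

With `$cur` missing, `diff` exits non-zero (the same `can't stat` condition logged above), so the `||` branch installs the unit and prints `unit installed`.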
	
	I0219 04:35:24.889899   11220 machine.go:91] provisioned docker machine in 16.7769744s
	I0219 04:35:24.889899   11220 client.go:171] LocalClient.Create took 1m7.9074971s
	I0219 04:35:24.890006   11220 start.go:167] duration metric: libmachine.API.Create for "offline-docker-928900" took 1m7.9076042s
	I0219 04:35:24.890006   11220 start.go:300] post-start starting for "offline-docker-928900" (driver="hyperv")
	I0219 04:35:24.890006   11220 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 04:35:24.900188   11220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 04:35:24.900188   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:25.634984   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:25.634984   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:25.635074   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:26.681276   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:26.681473   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:26.681862   11220 sshutil.go:53] new ssh client: &{IP:172.28.246.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\id_rsa Username:docker}
	I0219 04:35:26.791898   11220 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.8917162s)
	I0219 04:35:26.802687   11220 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 04:35:26.808600   11220 info.go:137] Remote host: Buildroot 2021.02.12
	I0219 04:35:26.808600   11220 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 04:35:26.809262   11220 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 04:35:26.810319   11220 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 04:35:26.822430   11220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 04:35:26.838434   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 04:35:26.881871   11220 start.go:303] post-start completed in 1.9918715s
	I0219 04:35:26.886058   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:30.693433    1628 start.go:368] acquired machines lock for "NoKubernetes-928900" in 2m20.9931914s
	I0219 04:35:30.693832    1628 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-928900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:NoKubernetes-928900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:35:30.693832    1628 start.go:125] createHost starting for "" (driver="hyperv")
	I0219 04:35:27.595294   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:27.595294   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:27.595419   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:28.667384   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:28.667384   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:28.667978   11220 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\config.json ...
	I0219 04:35:28.670918   11220 start.go:128] duration metric: createHost completed in 1m11.6963636s
	I0219 04:35:28.670918   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:29.447502   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:29.447578   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:29.447578   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:30.544615   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:30.544615   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:30.549965   11220 main.go:141] libmachine: Using SSH client type: native
	I0219 04:35:30.550553   11220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.85 22 <nil> <nil>}
	I0219 04:35:30.550553   11220 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0219 04:35:30.692713   11220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676781330.691932700
	
	I0219 04:35:30.692713   11220 fix.go:207] guest clock: 1676781330.691932700
	I0219 04:35:30.692713   11220 fix.go:220] Guest: 2023-02-19 04:35:30.6919327 +0000 GMT Remote: 2023-02-19 04:35:28.6709188 +0000 GMT m=+141.623337201 (delta=2.0210139s)
	I0219 04:35:30.692713   11220 fix.go:191] guest clock delta is within tolerance: 2.0210139s
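The `fix.go` lines above record a guest-clock sanity check: the provisioner reads the guest clock over SSH (the intended command is `date +%s.%N`; the `%!s(MISSING)` in the log is a Go fmt-verb rendering artifact) and accepts the result if the delta against the host clock is within tolerance. A sketch of that comparison, with a local `date` call standing in for the SSH'd guest read:

```shell
guest=$(date +%s.%N)   # stand-in for the clock value read from the guest
host=$(date +%s.%N)
# accept the guest clock if |host - guest| is under a 2-second tolerance
# (an illustrative threshold, not minikube's exact value)
within=$(awk -v g="$guest" -v h="$host" \
  'BEGIN { d = h - g; if (d < 0) d = -d; print (d < 2) ? "ok" : "skew" }')
echo "$within"
```

Two back-to-back local reads are microseconds apart, so this prints `ok`; in the log above the real host/guest delta was 2.0210139s, just inside minikube's tolerance.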
	I0219 04:35:30.692713   11220 start.go:83] releasing machines lock for "offline-docker-928900", held for 1m13.7181652s
	I0219 04:35:30.692713   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:31.450202   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:31.450347   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:31.450347   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:32.501176   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:32.501353   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:32.505075   11220 out.go:177] * Found network options:
	I0219 04:35:32.507809   11220 out.go:177]   - HTTP_PROXY=172.16.1.1:1
	W0219 04:35:32.510280   11220 out.go:239] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (172.28.246.85).
	I0219 04:35:32.512846   11220 out.go:177] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I0219 04:35:32.515797   11220 out.go:177]   - http_proxy=172.16.1.1:1
	I0219 04:35:30.698130    1628 out.go:204] * Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0219 04:35:30.698381    1628 start.go:159] libmachine.API.Create for "NoKubernetes-928900" (driver="hyperv")
	I0219 04:35:30.698381    1628 client.go:168] LocalClient.Create starting
	I0219 04:35:30.699431    1628 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0219 04:35:30.699431    1628 main.go:141] libmachine: Decoding PEM data...
	I0219 04:35:30.699431    1628 main.go:141] libmachine: Parsing certificate...
	I0219 04:35:30.699971    1628 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0219 04:35:30.700126    1628 main.go:141] libmachine: Decoding PEM data...
	I0219 04:35:30.700175    1628 main.go:141] libmachine: Parsing certificate...
	I0219 04:35:30.700305    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0219 04:35:31.133048    1628 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0219 04:35:31.133048    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:31.133108    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0219 04:35:31.816609    1628 main.go:141] libmachine: [stdout =====>] : False
	
	I0219 04:35:31.816609    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:31.816609    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 04:35:32.340962    1628 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 04:35:32.341152    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:32.341235    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 04:35:32.522805   11220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0219 04:35:32.522805   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:32.530797   11220 ssh_runner.go:195] Run: cat /version.json
	I0219 04:35:32.530797   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:35:33.323156   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:33.323316   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:33.323156   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:35:33.323392   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:33.323392   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:33.323465   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:35:34.485126   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:34.485126   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:34.485126   11220 sshutil.go:53] new ssh client: &{IP:172.28.246.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\id_rsa Username:docker}
	I0219 04:35:34.513879   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:35:34.513974   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:34.513974   11220 sshutil.go:53] new ssh client: &{IP:172.28.246.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\id_rsa Username:docker}
	I0219 04:35:34.585822   11220 ssh_runner.go:235] Completed: cat /version.json: (2.055032s)
	I0219 04:35:34.595846   11220 ssh_runner.go:195] Run: systemctl --version
	I0219 04:35:34.984675   11220 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.4617577s)
	I0219 04:35:34.997250   11220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0219 04:35:35.005275   11220 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0219 04:35:35.014480   11220 ssh_runner.go:195] Run: which cri-dockerd
	I0219 04:35:35.029556   11220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0219 04:35:35.044832   11220 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0219 04:35:35.087287   11220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0219 04:35:35.115786   11220 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0219 04:35:35.115786   11220 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:35:35.124069   11220 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:35:35.156726   11220 docker.go:630] Got preloaded images: 
	I0219 04:35:35.157267   11220 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
	I0219 04:35:35.167983   11220 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 04:35:35.194993   11220 ssh_runner.go:195] Run: which lz4
	I0219 04:35:35.211356   11220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0219 04:35:35.217502   11220 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0219 04:35:35.217502   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416334111 bytes)
	I0219 04:35:33.958832    1628 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 04:35:33.959023    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:33.960505    1628 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso...
	I0219 04:35:34.364974    1628 main.go:141] libmachine: Creating SSH key...
	I0219 04:35:34.428453    1628 main.go:141] libmachine: Creating VM...
	I0219 04:35:34.428453    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 04:35:36.003094    1628 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 04:35:36.003094    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:36.003094    1628 main.go:141] libmachine: Using switch "Default Switch"
	I0219 04:35:36.003094    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 04:35:36.812510    1628 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 04:35:36.812510    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:36.812510    1628 main.go:141] libmachine: Creating VHD
	I0219 04:35:36.812569    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\fixed.vhd' -SizeBytes 10MB -Fixed
	I0219 04:35:37.692180   11220 docker.go:594] Took 2.490968 seconds to copy over tarball
	I0219 04:35:37.705175   11220 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0219 04:35:38.606498    1628 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\fixed.
	                          vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 80F9A0F7-385E-49A3-9B13-B1EF56610A8C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0219 04:35:38.606498    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:38.606498    1628 main.go:141] libmachine: Writing magic tar header
	I0219 04:35:38.606581    1628 main.go:141] libmachine: Writing SSH key tar header
	I0219 04:35:38.614204    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\disk.vhd' -VHDType Dynamic -DeleteSource
	I0219 04:35:40.366151    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:35:40.366380    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:40.366380    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\disk.vhd' -SizeBytes 20000MB
	I0219 04:35:41.739426    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:35:41.739426    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:41.739601    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM NoKubernetes-928900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900' -SwitchName 'Default Switch' -MemoryStartupBytes 6000MB
	I0219 04:35:54.111297    1628 main.go:141] libmachine: [stdout =====>] : 
	Name                State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                ----- ----------- ----------------- ------   ------             -------
	NoKubernetes-928900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0219 04:35:54.111488    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:54.111488    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName NoKubernetes-928900 -DynamicMemoryEnabled $false
	I0219 04:35:56.005276    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:35:56.005276    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:56.005276    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor NoKubernetes-928900 -Count 2
	I0219 04:35:57.237535    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:35:57.237535    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:57.237721    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName NoKubernetes-928900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\boot2docker.iso'
	I0219 04:35:58.217785   11220 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (20.5125984s)
	I0219 04:35:58.217785   11220 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0219 04:35:58.283473   11220 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 04:35:58.302844   11220 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0219 04:35:58.345036   11220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:35:58.520324   11220 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:36:01.970105   11220 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.4497246s)
	I0219 04:36:01.970105   11220 start.go:485] detecting cgroup driver to use...
	I0219 04:36:01.970105   11220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:36:02.021878   11220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0219 04:36:02.053631   11220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0219 04:36:02.072510   11220 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0219 04:36:02.084584   11220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0219 04:36:02.111086   11220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:36:02.138040   11220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0219 04:36:02.162879   11220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:36:02.189803   11220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0219 04:36:02.217142   11220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0219 04:36:02.250427   11220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0219 04:36:02.274633   11220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0219 04:35:58.953053    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:35:58.953053    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:35:58.953346    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName NoKubernetes-928900 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\disk.vhd'
	I0219 04:36:01.306037    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:01.306037    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:01.306037    1628 main.go:141] libmachine: Starting VM...
	I0219 04:36:01.306219    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM NoKubernetes-928900
	I0219 04:36:02.306766   11220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:36:02.498195   11220 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0219 04:36:02.526827   11220 start.go:485] detecting cgroup driver to use...
	I0219 04:36:02.536683   11220 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0219 04:36:02.566411   11220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:36:02.602012   11220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0219 04:36:02.642865   11220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:36:02.671286   11220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:36:02.702194   11220 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0219 04:36:02.757697   11220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:36:02.786774   11220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:36:02.852480   11220 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0219 04:36:03.041749   11220 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0219 04:36:03.273126   11220 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0219 04:36:03.273126   11220 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0219 04:36:03.314943   11220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:36:03.493694   11220 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:36:05.274393   11220 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.7807046s)
	I0219 04:36:05.283409   11220 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:36:05.465362   11220 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0219 04:36:05.635613   11220 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:36:05.821763   11220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:36:06.011012   11220 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0219 04:36:06.043667   11220 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0219 04:36:06.054039   11220 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0219 04:36:06.062177   11220 start.go:553] Will wait 60s for crictl version
	I0219 04:36:06.073966   11220 ssh_runner.go:195] Run: which crictl
	I0219 04:36:06.089701   11220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0219 04:36:06.231041   11220 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0219 04:36:06.239052   11220 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:36:06.289016   11220 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:36:06.341177   11220 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0219 04:36:06.344145   11220 out.go:177]   - env HTTP_PROXY=172.16.1.1:1
	I0219 04:36:06.346141   11220 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0219 04:36:06.352058   11220 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0219 04:36:06.352449   11220 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0219 04:36:06.352449   11220 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0219 04:36:06.352449   11220 ip.go:207] Found interface: {Index:11 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7f:a7:14 Flags:up|broadcast|multicast|running}
	I0219 04:36:06.355806   11220 ip.go:210] interface addr: fe80::8ff9:73c7:b894:c84f/64
	I0219 04:36:06.355894   11220 ip.go:210] interface addr: 172.28.240.1/20
	I0219 04:36:06.365756   11220 ssh_runner.go:195] Run: grep 172.28.240.1	host.minikube.internal$ /etc/hosts
	I0219 04:36:06.371249   11220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
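The bash one-liner above is minikube's idempotent hosts-entry update: strip any existing `host.minikube.internal` line, append the current mapping, then copy the temp file back over `/etc/hosts`. A hedged sketch of the same strip-then-append pattern against a temp file (the tab anchor from the original `grep -v $'\t...'` is simplified to a plain pattern here, and the stale IP is invented for illustration):

```shell
#!/bin/sh
# Sketch: minikube's strip-then-append /etc/hosts update, run twice
# against a temp file to show it is idempotent.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.28.0.9\thost.minikube.internal\n' > "$hosts"

update_hosts() {
  # Drop any stale host.minikube.internal entry, then append the new one.
  { grep -v 'host.minikube.internal$' "$hosts"; \
    printf '172.28.240.1\thost.minikube.internal\n'; } > "$hosts.new"
  cp "$hosts.new" "$hosts"
}

update_hosts
update_hosts   # a second run must not duplicate the entry
entries=$(grep -c 'host.minikube.internal' "$hosts")
rm -f "$hosts" "$hosts.new"
```

Running the whole pipeline in `{ ...; } > tmp` followed by `cp` avoids truncating `/etc/hosts` while `grep` is still reading it.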
	I0219 04:36:06.395978   11220 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:36:06.403969   11220 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:36:06.442236   11220 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:36:06.442236   11220 docker.go:560] Images already preloaded, skipping extraction
	I0219 04:36:06.449389   11220 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:36:06.487149   11220 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:36:06.487149   11220 cache_images.go:84] Images are preloaded, skipping loading
	I0219 04:36:06.495245   11220 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0219 04:36:06.554935   11220 cni.go:84] Creating CNI manager for ""
	I0219 04:36:06.554935   11220 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:36:06.554935   11220 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0219 04:36:06.554935   11220 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.246.85 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:offline-docker-928900 NodeName:offline-docker-928900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.246.85"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.246.85 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0219 04:36:06.554935   11220 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.246.85
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "offline-docker-928900"
	  kubeletExtraArgs:
	    node-ip: 172.28.246.85
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.246.85"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
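The generated kubeadm config above joins four documents (`InitConfiguration`, `ClusterConfiguration`, `KubeletConfiguration`, `KubeProxyConfiguration`) with `---` separators, and the pod CIDR must agree between `networking.podSubnet` and the KubeProxy `clusterCIDR`. A hedged sketch of checking that agreement with plain shell tools, on a trimmed copy of the config from this log (no real cluster is touched):

```shell
#!/bin/sh
# Sketch: confirm podSubnet and KubeProxy clusterCIDR agree in a
# multi-document kubeadm config, using only sed on a temp file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "10.244.0.0/16"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
EOF

# Pull each quoted CIDR back out of its document.
pod_subnet=$(sed -n 's/^  podSubnet: "\(.*\)"/\1/p' "$cfg")
cluster_cidr=$(sed -n 's/^clusterCIDR: "\(.*\)"/\1/p' "$cfg")
rm -f "$cfg"
```

A mismatch between these two fields is a classic cause of kube-proxy mis-programming pod-to-host traffic, which is one of the usual suspects when a `ping -c 1 <host IP>` from a pod loses 100% of packets, as in the failing test above.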
	
	I0219 04:36:06.555540   11220 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=offline-docker-928900 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.246.85
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:offline-docker-928900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
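The kubelet drop-in above uses the standard systemd override pattern: an empty `ExecStart=` first clears the unit's inherited command list, then the full replacement command is set. A hedged sketch that checks a drop-in follows that reset-then-override shape (file path and the trimmed command line are illustrative):

```shell
#!/bin/sh
# Sketch: verify a systemd drop-in clears ExecStart before overriding it,
# the pattern minikube writes into 10-kubeadm.conf.
dropin=$(mktemp)
cat > "$dropin" <<'EOF'
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --config=/var/lib/kubelet/config.yaml
EOF

total=$(grep -c '^ExecStart=' "$dropin")    # all ExecStart lines
clears=$(grep -c '^ExecStart=$' "$dropin")  # the empty "reset" line
rm -f "$dropin"
```

Without the empty `ExecStart=` line, systemd would reject the drop-in for a `Type=simple` service, since such units allow only one `ExecStart` command.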
	I0219 04:36:06.566785   11220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0219 04:36:06.583546   11220 binaries.go:44] Found k8s binaries, skipping transfer
	I0219 04:36:06.595663   11220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0219 04:36:06.611289   11220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (454 bytes)
	I0219 04:36:06.641279   11220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0219 04:36:06.671272   11220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0219 04:36:06.709458   11220 ssh_runner.go:195] Run: grep 172.28.246.85	control-plane.minikube.internal$ /etc/hosts
	I0219 04:36:06.715860   11220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.246.85	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:36:06.738245   11220 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900 for IP: 172.28.246.85
	I0219 04:36:06.738245   11220 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:06.738907   11220 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0219 04:36:06.739686   11220 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0219 04:36:06.740574   11220 certs.go:315] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\client.key
	I0219 04:36:06.740772   11220 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\client.crt with IP's: []
	I0219 04:36:06.930006   11220 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\client.crt ...
	I0219 04:36:06.930006   11220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\client.crt: {Name:mk9ca54252595f9ab11c6c82f374c57d36342abd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:06.931022   11220 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\client.key ...
	I0219 04:36:06.931022   11220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\client.key: {Name:mk0e4414a5871d3b645e354d2366f0664ccca23b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:06.932025   11220 certs.go:315] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.key.3e6ff387
	I0219 04:36:06.932025   11220 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.crt.3e6ff387 with IP's: [172.28.246.85 10.96.0.1 127.0.0.1 10.0.0.1]
	I0219 04:36:03.123286    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:03.123286    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:03.123286    1628 main.go:141] libmachine: Waiting for host to start...
	I0219 04:36:03.123286    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:03.910176    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:03.910176    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:03.910176    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:05.010810    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:05.010810    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:06.012691    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:06.801290    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:06.801472    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:06.801472    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:07.917224   11220 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.crt.3e6ff387 ...
	I0219 04:36:07.917224   11220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.crt.3e6ff387: {Name:mk93509684f4a9d638e1cf43deec82004cfa1638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:07.919346   11220 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.key.3e6ff387 ...
	I0219 04:36:07.919346   11220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.key.3e6ff387: {Name:mk2c24ed1b4c1a4bb01d3cec89b4631b39d8ef05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:07.920904   11220 certs.go:333] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.crt.3e6ff387 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.crt
	I0219 04:36:07.927475   11220 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.key.3e6ff387 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.key
	I0219 04:36:07.930819   11220 certs.go:315] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\proxy-client.key
	I0219 04:36:07.931814   11220 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\proxy-client.crt with IP's: []
	I0219 04:36:08.113069   11220 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\proxy-client.crt ...
	I0219 04:36:08.113069   11220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\proxy-client.crt: {Name:mk1eff50f027c2b1736311380e60653f8bfc71fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:08.114050   11220 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\proxy-client.key ...
	I0219 04:36:08.114050   11220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\proxy-client.key: {Name:mk589158bee47663d12b4833876cb313f0d0cd1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:08.123091   11220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem (1338 bytes)
	W0219 04:36:08.123091   11220 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148_empty.pem, impossibly tiny 0 bytes
	I0219 04:36:08.123091   11220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0219 04:36:08.123091   11220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0219 04:36:08.124058   11220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0219 04:36:08.124058   11220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0219 04:36:08.124058   11220 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem (1708 bytes)
	I0219 04:36:08.126057   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0219 04:36:08.176135   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0219 04:36:08.217781   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0219 04:36:08.260107   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\offline-docker-928900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0219 04:36:08.300771   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0219 04:36:08.339199   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0219 04:36:08.381665   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0219 04:36:08.427541   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0219 04:36:08.472155   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /usr/share/ca-certificates/101482.pem (1708 bytes)
	I0219 04:36:08.511681   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0219 04:36:08.552763   11220 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem --> /usr/share/ca-certificates/10148.pem (1338 bytes)
	I0219 04:36:08.597594   11220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0219 04:36:08.636207   11220 ssh_runner.go:195] Run: openssl version
	I0219 04:36:08.656811   11220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101482.pem && ln -fs /usr/share/ca-certificates/101482.pem /etc/ssl/certs/101482.pem"
	I0219 04:36:08.685799   11220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101482.pem
	I0219 04:36:08.692398   11220 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:36:08.705378   11220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101482.pem
	I0219 04:36:08.724400   11220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101482.pem /etc/ssl/certs/3ec20f2e.0"
	I0219 04:36:08.763513   11220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0219 04:36:08.791129   11220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:36:08.797529   11220 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:36:08.805496   11220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:36:08.824303   11220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0219 04:36:08.853165   11220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10148.pem && ln -fs /usr/share/ca-certificates/10148.pem /etc/ssl/certs/10148.pem"
	I0219 04:36:08.879761   11220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10148.pem
	I0219 04:36:08.886311   11220 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:36:08.896968   11220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10148.pem
	I0219 04:36:08.915134   11220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10148.pem /etc/ssl/certs/51391683.0"
	I0219 04:36:08.940900   11220 kubeadm.go:401] StartCluster: {Name:offline-docker-928900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.26.1 ClusterName:offline-docker-928900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.246.85 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:36:08.953766   11220 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0219 04:36:09.001977   11220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0219 04:36:09.031450   11220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0219 04:36:09.074743   11220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0219 04:36:09.096341   11220 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0219 04:36:09.096429   11220 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0219 04:36:09.506617   11220 kubeadm.go:322] W0219 04:36:09.484983    1499 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:36:10.493226   11220 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0219 04:36:07.850666    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:07.850666    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:08.854763    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:09.670468    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:09.670468    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:09.670468    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:10.967024    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:10.967024    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:11.981169    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:12.847905    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:12.847905    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:12.847905    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:14.205597    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:14.205597    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:15.217915    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:15.967523    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:15.967523    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:15.967523    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:17.064839    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:17.064839    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:18.064994    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:18.842019    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:18.842019    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:18.842019    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:19.933285    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:19.933285    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:20.934998    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:21.726156    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:21.726321    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:21.726321    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:22.802749    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:22.803031    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:23.818547    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:24.595861    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:24.595861    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:24.595861    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:25.671428    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:25.671428    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:26.676626    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:27.498068    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:27.498257    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:27.498341    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:31.465750   11220 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0219 04:36:31.465750   11220 kubeadm.go:322] [preflight] Running pre-flight checks
	I0219 04:36:31.466473   11220 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0219 04:36:31.466660   11220 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0219 04:36:31.466905   11220 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0219 04:36:31.467036   11220 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0219 04:36:31.469558   11220 out.go:204]   - Generating certificates and keys ...
	I0219 04:36:31.469638   11220 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0219 04:36:31.469638   11220 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0219 04:36:31.470163   11220 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0219 04:36:31.470408   11220 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0219 04:36:31.470734   11220 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0219 04:36:31.470734   11220 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0219 04:36:31.470734   11220 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0219 04:36:31.471331   11220 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost offline-docker-928900] and IPs [172.28.246.85 127.0.0.1 ::1]
	I0219 04:36:31.471331   11220 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0219 04:36:31.472247   11220 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost offline-docker-928900] and IPs [172.28.246.85 127.0.0.1 ::1]
	I0219 04:36:31.472594   11220 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0219 04:36:31.472916   11220 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0219 04:36:31.473199   11220 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0219 04:36:31.473470   11220 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0219 04:36:31.473523   11220 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0219 04:36:31.473825   11220 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0219 04:36:31.474089   11220 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0219 04:36:31.474372   11220 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0219 04:36:31.474794   11220 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0219 04:36:31.474869   11220 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0219 04:36:31.474869   11220 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0219 04:36:31.475400   11220 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0219 04:36:31.479149   11220 out.go:204]   - Booting up control plane ...
	I0219 04:36:31.479814   11220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0219 04:36:31.480073   11220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0219 04:36:31.480401   11220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0219 04:36:31.480401   11220 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0219 04:36:31.481159   11220 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0219 04:36:31.481159   11220 kubeadm.go:322] [apiclient] All control plane components are healthy after 15.007700 seconds
	I0219 04:36:31.481800   11220 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0219 04:36:31.482055   11220 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0219 04:36:31.482055   11220 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0219 04:36:31.482811   11220 kubeadm.go:322] [mark-control-plane] Marking the node offline-docker-928900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0219 04:36:31.482811   11220 kubeadm.go:322] [bootstrap-token] Using token: wvfirw.1xer41cq7lm85lnh
	I0219 04:36:31.489756   11220 out.go:204]   - Configuring RBAC rules ...
	I0219 04:36:31.490751   11220 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0219 04:36:31.490751   11220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0219 04:36:31.490751   11220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0219 04:36:31.491748   11220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0219 04:36:31.491748   11220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0219 04:36:31.492832   11220 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0219 04:36:31.492832   11220 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0219 04:36:31.492832   11220 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0219 04:36:31.493750   11220 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0219 04:36:31.493750   11220 kubeadm.go:322] 
	I0219 04:36:31.493750   11220 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0219 04:36:31.493750   11220 kubeadm.go:322] 
	I0219 04:36:31.493750   11220 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0219 04:36:31.493750   11220 kubeadm.go:322] 
	I0219 04:36:31.493750   11220 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0219 04:36:31.494751   11220 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0219 04:36:31.494751   11220 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0219 04:36:31.494751   11220 kubeadm.go:322] 
	I0219 04:36:31.494751   11220 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0219 04:36:31.494751   11220 kubeadm.go:322] 
	I0219 04:36:31.495768   11220 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0219 04:36:31.495768   11220 kubeadm.go:322] 
	I0219 04:36:31.495768   11220 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0219 04:36:31.495768   11220 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0219 04:36:31.495768   11220 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0219 04:36:31.495768   11220 kubeadm.go:322] 
	I0219 04:36:31.496785   11220 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0219 04:36:31.496785   11220 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0219 04:36:31.496785   11220 kubeadm.go:322] 
	I0219 04:36:31.496785   11220 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token wvfirw.1xer41cq7lm85lnh \
	I0219 04:36:31.497754   11220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 \
	I0219 04:36:31.497754   11220 kubeadm.go:322] 	--control-plane 
	I0219 04:36:31.497754   11220 kubeadm.go:322] 
	I0219 04:36:31.497754   11220 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0219 04:36:31.497754   11220 kubeadm.go:322] 
	I0219 04:36:31.497754   11220 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token wvfirw.1xer41cq7lm85lnh \
	I0219 04:36:31.498763   11220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 
	I0219 04:36:31.498763   11220 cni.go:84] Creating CNI manager for ""
	I0219 04:36:31.498763   11220 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:36:31.506855   11220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0219 04:36:31.522055   11220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0219 04:36:31.540268   11220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0219 04:36:31.572420   11220 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0219 04:36:31.585992   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=b522747fea7d12101d906a75c46b71d9d9f96e61 minikube.k8s.io/name=offline-docker-928900 minikube.k8s.io/updated_at=2023_02_19T04_36_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:31.590822   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:31.672376   11220 ops.go:34] apiserver oom_adj: -16
	I0219 04:36:28.606489    1628 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:36:28.606489    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:29.609763    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:30.415509    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:30.415509    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:30.415509    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:31.600639    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:31.600639    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:31.600705    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:32.455225    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:32.455225    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:32.455225    1628 machine.go:88] provisioning docker machine ...
	I0219 04:36:32.455225    1628 buildroot.go:166] provisioning hostname "NoKubernetes-928900"
	I0219 04:36:32.457454    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:32.439818   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:33.131395   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:33.620714   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:34.134628   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:34.621604   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:35.135732   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:35.626092   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:36.129272   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:36.634957   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:37.122722   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:33.294759    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:33.294759    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:33.294759    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:34.391277    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:34.391277    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:34.394279    1628 main.go:141] libmachine: Using SSH client type: native
	I0219 04:36:34.403631    1628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.255.137 22 <nil> <nil>}
	I0219 04:36:34.403631    1628 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-928900 && echo "NoKubernetes-928900" | sudo tee /etc/hostname
	I0219 04:36:34.580535    1628 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-928900
	
	I0219 04:36:34.580535    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:35.374253    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:35.374253    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:35.374253    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:36.480831    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:36.480960    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:36.485981    1628 main.go:141] libmachine: Using SSH client type: native
	I0219 04:36:36.486627    1628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.255.137 22 <nil> <nil>}
	I0219 04:36:36.486627    1628 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-928900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-928900/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-928900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:36:36.648991    1628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0219 04:36:36.648991    1628 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:36:36.648991    1628 buildroot.go:174] setting up certificates
	I0219 04:36:36.648991    1628 provision.go:83] configureAuth start
	I0219 04:36:36.648991    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:37.413706    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:37.413706    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:37.413881    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:37.620947   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:38.129032   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:38.637292   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:39.123831   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:39.627231   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:40.136230   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:40.622407   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:41.129420   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:41.625645   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:42.128795   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:38.560772    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:38.560772    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:38.560772    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:39.351933    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:39.351933    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:39.352303    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:40.455817    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:40.455817    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:40.455817    1628 provision.go:138] copyHostCerts
	I0219 04:36:40.455817    1628 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:36:40.455817    1628 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:36:40.456603    1628 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:36:40.459691    1628 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:36:40.459691    1628 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:36:40.460060    1628 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:36:40.460704    1628 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:36:40.460704    1628 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:36:40.460704    1628 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:36:40.462697    1628 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.NoKubernetes-928900 san=[172.28.255.137 172.28.255.137 localhost 127.0.0.1 minikube NoKubernetes-928900]
	I0219 04:36:40.782474    1628 provision.go:172] copyRemoteCerts
	I0219 04:36:40.792470    1628 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:36:40.792470    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:41.580590    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:41.580590    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:41.580804    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:42.623149   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:43.126579   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:43.634544   11220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:36:43.893360   11220 kubeadm.go:1073] duration metric: took 12.3208473s to wait for elevateKubeSystemPrivileges.
	I0219 04:36:43.893360   11220 kubeadm.go:403] StartCluster complete in 34.9525774s
	I0219 04:36:43.893360   11220 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:43.893360   11220 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:36:43.895358   11220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:36:43.896386   11220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0219 04:36:43.896386   11220 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0219 04:36:43.896386   11220 addons.go:65] Setting storage-provisioner=true in profile "offline-docker-928900"
	I0219 04:36:43.896386   11220 addons.go:65] Setting default-storageclass=true in profile "offline-docker-928900"
	I0219 04:36:43.896386   11220 addons.go:227] Setting addon storage-provisioner=true in "offline-docker-928900"
	I0219 04:36:43.897368   11220 config.go:182] Loaded profile config "offline-docker-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:36:43.897368   11220 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "offline-docker-928900"
	I0219 04:36:43.897368   11220 host.go:66] Checking if "offline-docker-928900" exists ...
	I0219 04:36:43.897368   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:36:43.899314   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:36:43.906164   11220 kapi.go:59] client config for offline-docker-928900: &rest.Config{Host:"https://172.28.246.85:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\offline-docker-928900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\offline-docker-928900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:36:44.397747   11220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0219 04:36:44.518124   11220 kapi.go:248] "coredns" deployment in "kube-system" namespace and "offline-docker-928900" context rescaled to 1 replicas
	I0219 04:36:44.518124   11220 start.go:223] Will wait 6m0s for node &{Name: IP:172.28.246.85 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:36:44.523483   11220 out.go:177] * Verifying Kubernetes components...
	I0219 04:36:44.536233   11220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:36:44.780622   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:44.780622   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:44.780622   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:44.780622   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:44.784650   11220 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0219 04:36:44.782634   11220 kapi.go:59] client config for offline-docker-928900: &rest.Config{Host:"https://172.28.246.85:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\offline-docker-928900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\offline-docker-928900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:36:44.787632   11220 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0219 04:36:44.787632   11220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0219 04:36:44.787632   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:36:44.793614   11220 addons.go:227] Setting addon default-storageclass=true in "offline-docker-928900"
	I0219 04:36:44.793614   11220 host.go:66] Checking if "offline-docker-928900" exists ...
	I0219 04:36:44.795617   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:36:45.590116   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:45.590223   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:45.590116   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:45.590223   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:45.590308   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:45.590450   11220 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0219 04:36:45.590450   11220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0219 04:36:45.590450   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-928900 ).state
	I0219 04:36:46.399579   11220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:46.399579   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:46.399579   11220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:46.525746   11220 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.1269427s)
	I0219 04:36:46.525746   11220 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.9895196s)
	I0219 04:36:46.525746   11220 start.go:921] {"host.minikube.internal": 172.28.240.1} host record injected into CoreDNS's ConfigMap
	I0219 04:36:46.528082   11220 kapi.go:59] client config for offline-docker-928900: &rest.Config{Host:"https://172.28.246.85:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\offline-docker-928900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\offline-docker-928900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:36:46.529064   11220 node_ready.go:35] waiting up to 6m0s for node "offline-docker-928900" to be "Ready" ...
	I0219 04:36:46.544140   11220 node_ready.go:49] node "offline-docker-928900" has status "Ready":"True"
	I0219 04:36:46.544269   11220 node_ready.go:38] duration metric: took 15.205ms waiting for node "offline-docker-928900" to be "Ready" ...
	I0219 04:36:46.544353   11220 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:36:46.565202   11220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-d9lnt" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:46.839991   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:36:46.839991   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:46.839991   11220 sshutil.go:53] new ssh client: &{IP:172.28.246.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\id_rsa Username:docker}
	I0219 04:36:47.009280   11220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0219 04:36:42.658229    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:42.658229    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:42.658763    1628 sshutil.go:53] new ssh client: &{IP:172.28.255.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\id_rsa Username:docker}
	I0219 04:36:42.769335    1628 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.9768723s)
	I0219 04:36:42.770315    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:36:42.813550    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I0219 04:36:42.858261    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0219 04:36:42.907878    1628 provision.go:86] duration metric: configureAuth took 6.2589075s
	I0219 04:36:42.907878    1628 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:36:42.908875    1628 config.go:182] Loaded profile config "NoKubernetes-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:36:42.908875    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:43.686664    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:43.686664    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:43.686759    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:44.907768    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:44.907768    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:44.913686    1628 main.go:141] libmachine: Using SSH client type: native
	I0219 04:36:44.914692    1628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.255.137 22 <nil> <nil>}
	I0219 04:36:44.914692    1628 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:36:45.073886    1628 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:36:45.073958    1628 buildroot.go:70] root file system type: tmpfs
	I0219 04:36:45.074098    1628 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:36:45.074169    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:45.858071    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:45.858281    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:45.858281    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:47.076345    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:47.076345    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:47.080342    1628 main.go:141] libmachine: Using SSH client type: native
	I0219 04:36:47.081379    1628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.255.137 22 <nil> <nil>}
	I0219 04:36:47.081379    1628 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 04:36:47.262674    1628 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0219 04:36:47.262674    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:47.591246   11220 main.go:141] libmachine: [stdout =====>] : 172.28.246.85
	
	I0219 04:36:47.591246   11220 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:47.591246   11220 sshutil.go:53] new ssh client: &{IP:172.28.246.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\offline-docker-928900\id_rsa Username:docker}
	I0219 04:36:47.803357   11220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0219 04:36:48.102068   11220 pod_ready.go:92] pod "coredns-787d4945fb-d9lnt" in "kube-system" namespace has status "Ready":"True"
	I0219 04:36:48.102068   11220 pod_ready.go:81] duration metric: took 1.5368713s waiting for pod "coredns-787d4945fb-d9lnt" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.102068   11220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-ltwwh" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.111096   11220 pod_ready.go:92] pod "coredns-787d4945fb-ltwwh" in "kube-system" namespace has status "Ready":"True"
	I0219 04:36:48.111096   11220 pod_ready.go:81] duration metric: took 9.0281ms waiting for pod "coredns-787d4945fb-ltwwh" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.111096   11220 pod_ready.go:78] waiting up to 6m0s for pod "etcd-offline-docker-928900" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.130834   11220 pod_ready.go:92] pod "etcd-offline-docker-928900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:36:48.130834   11220 pod_ready.go:81] duration metric: took 19.7382ms waiting for pod "etcd-offline-docker-928900" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.130834   11220 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-offline-docker-928900" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.137739   11220 pod_ready.go:92] pod "kube-apiserver-offline-docker-928900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:36:48.137739   11220 pod_ready.go:81] duration metric: took 6.9052ms waiting for pod "kube-apiserver-offline-docker-928900" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.137739   11220 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-offline-docker-928900" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.145479   11220 pod_ready.go:92] pod "kube-controller-manager-offline-docker-928900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:36:48.145479   11220 pod_ready.go:81] duration metric: took 7.7398ms waiting for pod "kube-controller-manager-offline-docker-928900" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.145559   11220 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zqzc7" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.370309   11220 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0219 04:36:48.372825   11220 addons.go:492] enable addons completed in 4.4764542s: enabled=[storage-provisioner default-storageclass]
	I0219 04:36:48.535551   11220 pod_ready.go:92] pod "kube-proxy-zqzc7" in "kube-system" namespace has status "Ready":"True"
	I0219 04:36:48.535551   11220 pod_ready.go:81] duration metric: took 389.9933ms waiting for pod "kube-proxy-zqzc7" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.535551   11220 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-offline-docker-928900" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.953444   11220 pod_ready.go:92] pod "kube-scheduler-offline-docker-928900" in "kube-system" namespace has status "Ready":"True"
	I0219 04:36:48.953444   11220 pod_ready.go:81] duration metric: took 417.8945ms waiting for pod "kube-scheduler-offline-docker-928900" in "kube-system" namespace to be "Ready" ...
	I0219 04:36:48.953444   11220 pod_ready.go:38] duration metric: took 2.4090566s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
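The `pod_ready` lines above are a poll-until-ready loop over each system-critical pod with a 6m0s deadline. A minimal sketch of that pattern, assuming a stand-in `probe` function in place of the real kubectl status check:

```shell
# probe is a hypothetical stand-in for "query the pod's Ready condition".
probe() { cat /tmp/pod_status 2>/dev/null; }
echo Ready > /tmp/pod_status   # simulate the pod becoming Ready immediately

# Poll until the probe reports Ready or the deadline passes (5s here; 6m in the log).
deadline=$(( $(date +%s) + 5 ))
until [ "$(probe)" = "Ready" ]; do
  [ "$(date +%s)" -ge "$deadline" ] && { echo "timed out"; break; }
  sleep 1
done
echo "pod is $(probe)"
```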
	I0219 04:36:48.953444   11220 api_server.go:51] waiting for apiserver process to appear ...
	I0219 04:36:48.963338   11220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:36:48.991941   11220 api_server.go:71] duration metric: took 4.473832s to wait for apiserver process to appear ...
	I0219 04:36:48.992061   11220 api_server.go:87] waiting for apiserver healthz status ...
	I0219 04:36:48.992061   11220 api_server.go:252] Checking apiserver healthz at https://172.28.246.85:8443/healthz ...
	I0219 04:36:49.001975   11220 api_server.go:278] https://172.28.246.85:8443/healthz returned 200:
	ok
	I0219 04:36:49.004699   11220 api_server.go:140] control plane version: v1.26.1
	I0219 04:36:49.004779   11220 api_server.go:130] duration metric: took 12.7177ms to wait for apiserver health ...
	I0219 04:36:49.004779   11220 system_pods.go:43] waiting for kube-system pods to appear ...
	I0219 04:36:49.147993   11220 system_pods.go:59] 8 kube-system pods found
	I0219 04:36:49.147993   11220 system_pods.go:61] "coredns-787d4945fb-d9lnt" [fba5029c-6a1e-4867-96aa-38252b508dcd] Running
	I0219 04:36:49.147993   11220 system_pods.go:61] "coredns-787d4945fb-ltwwh" [bd9e7528-e4e0-455d-943d-9d30d2c4f86a] Running
	I0219 04:36:49.147993   11220 system_pods.go:61] "etcd-offline-docker-928900" [988e3e49-cef8-4af7-9d68-ebfaa37fcddd] Running
	I0219 04:36:49.147993   11220 system_pods.go:61] "kube-apiserver-offline-docker-928900" [8d372eea-8522-43a1-b53d-242e612d7574] Running
	I0219 04:36:49.147993   11220 system_pods.go:61] "kube-controller-manager-offline-docker-928900" [73b4425d-b1c2-4191-b12c-22a69cfbfe7c] Running
	I0219 04:36:49.147993   11220 system_pods.go:61] "kube-proxy-zqzc7" [67168779-5ccc-4fdc-be85-5e920523a686] Running
	I0219 04:36:49.147993   11220 system_pods.go:61] "kube-scheduler-offline-docker-928900" [c2354ae5-928e-448c-b4cd-6c850d4431c8] Running
	I0219 04:36:49.147993   11220 system_pods.go:61] "storage-provisioner" [2bc544d2-4233-4227-8e89-d1a6dc59d23d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0219 04:36:49.147993   11220 system_pods.go:74] duration metric: took 143.2151ms to wait for pod list to return data ...
	I0219 04:36:49.147993   11220 default_sa.go:34] waiting for default service account to be created ...
	I0219 04:36:49.339376   11220 default_sa.go:45] found service account: "default"
	I0219 04:36:49.339376   11220 default_sa.go:55] duration metric: took 191.3834ms for default service account to be created ...
	I0219 04:36:49.339376   11220 system_pods.go:116] waiting for k8s-apps to be running ...
	I0219 04:36:49.549170   11220 system_pods.go:86] 8 kube-system pods found
	I0219 04:36:49.549170   11220 system_pods.go:89] "coredns-787d4945fb-d9lnt" [fba5029c-6a1e-4867-96aa-38252b508dcd] Running
	I0219 04:36:49.549170   11220 system_pods.go:89] "coredns-787d4945fb-ltwwh" [bd9e7528-e4e0-455d-943d-9d30d2c4f86a] Running
	I0219 04:36:49.549170   11220 system_pods.go:89] "etcd-offline-docker-928900" [988e3e49-cef8-4af7-9d68-ebfaa37fcddd] Running
	I0219 04:36:49.549170   11220 system_pods.go:89] "kube-apiserver-offline-docker-928900" [8d372eea-8522-43a1-b53d-242e612d7574] Running
	I0219 04:36:49.549170   11220 system_pods.go:89] "kube-controller-manager-offline-docker-928900" [73b4425d-b1c2-4191-b12c-22a69cfbfe7c] Running
	I0219 04:36:49.549170   11220 system_pods.go:89] "kube-proxy-zqzc7" [67168779-5ccc-4fdc-be85-5e920523a686] Running
	I0219 04:36:49.549170   11220 system_pods.go:89] "kube-scheduler-offline-docker-928900" [c2354ae5-928e-448c-b4cd-6c850d4431c8] Running
	I0219 04:36:49.549170   11220 system_pods.go:89] "storage-provisioner" [2bc544d2-4233-4227-8e89-d1a6dc59d23d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0219 04:36:49.549170   11220 system_pods.go:126] duration metric: took 209.7943ms to wait for k8s-apps to be running ...
	I0219 04:36:49.549170   11220 system_svc.go:44] waiting for kubelet service to be running ....
	I0219 04:36:49.559153   11220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:36:49.587740   11220 system_svc.go:56] duration metric: took 38.5708ms WaitForService to wait for kubelet.
	I0219 04:36:49.587740   11220 kubeadm.go:578] duration metric: took 5.0696334s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0219 04:36:49.587740   11220 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:36:49.734758   11220 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:36:49.734852   11220 node_conditions.go:123] node cpu capacity is 2
	I0219 04:36:49.734852   11220 node_conditions.go:105] duration metric: took 147.1123ms to run NodePressure ...
	I0219 04:36:49.734852   11220 start.go:228] waiting for startup goroutines ...
	I0219 04:36:49.734852   11220 start.go:233] waiting for cluster config update ...
	I0219 04:36:49.734951   11220 start.go:242] writing updated cluster config ...
	I0219 04:36:49.744234   11220 ssh_runner.go:195] Run: rm -f paused
	I0219 04:36:49.949509   11220 start.go:555] kubectl: 1.18.2, cluster: 1.26.1 (minor skew: 8)
	I0219 04:36:49.992031   11220 out.go:177] 
	W0219 04:36:49.995998   11220 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.1.
	I0219 04:36:50.000021   11220 out.go:177]   - Want kubectl v1.26.1? Try 'minikube kubectl -- get pods -A'
	I0219 04:36:50.006022   11220 out.go:177] * Done! kubectl is now configured to use "offline-docker-928900" cluster and "default" namespace by default
	I0219 04:36:48.071182    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:48.071356    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:48.071388    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:49.194843    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:49.194843    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:49.199841    1628 main.go:141] libmachine: Using SSH client type: native
	I0219 04:36:49.200527    1628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.255.137 22 <nil> <nil>}
	I0219 04:36:49.200527    1628 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 04:36:50.486710    1628 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0219 04:36:50.487666    1628 machine.go:91] provisioned docker machine in 18.0325024s
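The `diff ... || { mv ...; systemctl ... }` command above is an idempotent update: the unit file is only swapped in (and docker restarted) when the staged `.new` copy differs from what is installed, and a missing target counts as a difference — which is why the first provision logs `diff: can't stat ... No such file or directory` and then proceeds. The core of the pattern, using scratch paths instead of the real systemd unit:

```shell
# Scratch directory standing in for /lib/systemd/system (assumption for the demo).
rm -rf /tmp/unitdemo && mkdir -p /tmp/unitdemo
printf 'ExecStart=/usr/bin/dockerd\n' > /tmp/unitdemo/docker.service.new

# First run: target is absent, diff exits non-zero, so the new file is moved
# into place. On later runs with identical content, diff succeeds and the
# mv (and, in the real command, the docker restart) is skipped entirely.
diff -u /tmp/unitdemo/docker.service /tmp/unitdemo/docker.service.new 2>/dev/null \
  || mv /tmp/unitdemo/docker.service.new /tmp/unitdemo/docker.service
```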
	I0219 04:36:50.487666    1628 client.go:171] LocalClient.Create took 1m19.7895518s
	I0219 04:36:50.487666    1628 start.go:167] duration metric: libmachine.API.Create for "NoKubernetes-928900" took 1m19.7895518s
	I0219 04:36:50.487666    1628 start.go:300] post-start starting for "NoKubernetes-928900" (driver="hyperv")
	I0219 04:36:50.487666    1628 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 04:36:50.495660    1628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 04:36:50.495660    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:51.271120    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:51.271120    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:51.271304    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:52.439368    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:52.439368    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:52.440175    1628 sshutil.go:53] new ssh client: &{IP:172.28.255.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\id_rsa Username:docker}
	I0219 04:36:52.552168    1628 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (2.0564221s)
	I0219 04:36:52.561959    1628 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 04:36:52.569342    1628 info.go:137] Remote host: Buildroot 2021.02.12
	I0219 04:36:52.569342    1628 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 04:36:52.569747    1628 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 04:36:52.570760    1628 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 04:36:52.580680    1628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 04:36:52.598251    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 04:36:56.647707    8340 start.go:368] acquired machines lock for "kubernetes-upgrade-803700" in 3m30.139618s
	I0219 04:36:56.648025    8340 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-803700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Ku
bernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-803700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:36:56.648386    8340 start.go:125] createHost starting for "" (driver="hyperv")
	I0219 04:36:52.646421    1628 start.go:303] post-start completed in 2.1587622s
	I0219 04:36:52.655267    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:53.486600    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:53.486600    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:53.486600    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:54.596026    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:54.596026    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:54.596026    1628 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\config.json ...
	I0219 04:36:54.598669    1628 start.go:128] duration metric: createHost completed in 1m23.9051162s
	I0219 04:36:54.598669    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:55.373522    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:55.373522    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:55.373522    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:56.499574    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:56.499618    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:56.504920    1628 main.go:141] libmachine: Using SSH client type: native
	I0219 04:36:56.505603    1628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.255.137 22 <nil> <nil>}
	I0219 04:36:56.505603    1628 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0219 04:36:56.647079    1628 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676781416.637518960
	
	I0219 04:36:56.647079    1628 fix.go:207] guest clock: 1676781416.637518960
	I0219 04:36:56.647079    1628 fix.go:220] Guest: 2023-02-19 04:36:56.63751896 +0000 GMT Remote: 2023-02-19 04:36:54.598669 +0000 GMT m=+227.173683401 (delta=2.03884996s)
	I0219 04:36:56.647079    1628 fix.go:191] guest clock delta is within tolerance: 2.03884996s
	I0219 04:36:56.647079    1628 start.go:83] releasing machines lock for "NoKubernetes-928900", held for 1m25.9538003s
	I0219 04:36:56.647618    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:57.464036    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:57.464036    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:57.464036    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:56.651190    8340 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0219 04:36:56.651885    8340 start.go:159] libmachine.API.Create for "kubernetes-upgrade-803700" (driver="hyperv")
	I0219 04:36:56.651885    8340 client.go:168] LocalClient.Create starting
	I0219 04:36:56.652541    8340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0219 04:36:56.652541    8340 main.go:141] libmachine: Decoding PEM data...
	I0219 04:36:56.652541    8340 main.go:141] libmachine: Parsing certificate...
	I0219 04:36:56.653161    8340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0219 04:36:56.653462    8340 main.go:141] libmachine: Decoding PEM data...
	I0219 04:36:56.653534    8340 main.go:141] libmachine: Parsing certificate...
	I0219 04:36:56.653765    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0219 04:36:57.099213    8340 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0219 04:36:57.099213    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:57.099213    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0219 04:36:57.924494    8340 main.go:141] libmachine: [stdout =====>] : False
	
	I0219 04:36:57.924494    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:57.924494    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 04:36:58.649696    8340 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 04:36:58.649750    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:58.649750    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 04:36:58.664982    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:36:58.664982    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:58.668677    1628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0219 04:36:58.668677    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:58.684844    1628 ssh_runner.go:195] Run: cat /version.json
	I0219 04:36:58.685045    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:36:59.487691    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:59.487691    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:59.487894    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:36:59.518357    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:36:59.518536    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:36:59.518536    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:00.648142    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:37:00.648142    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:00.648142    1628 sshutil.go:53] new ssh client: &{IP:172.28.255.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\id_rsa Username:docker}
	I0219 04:37:00.692089    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:37:00.692089    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:00.692089    1628 sshutil.go:53] new ssh client: &{IP:172.28.255.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\id_rsa Username:docker}
	I0219 04:37:00.804310    1628 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.1356402s)
	I0219 04:37:00.805186    1628 ssh_runner.go:235] Completed: cat /version.json: (2.1194733s)
	I0219 04:37:00.815215    1628 ssh_runner.go:195] Run: systemctl --version
	I0219 04:37:00.834234    1628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0219 04:37:00.842204    1628 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0219 04:37:00.853208    1628 ssh_runner.go:195] Run: which cri-dockerd
	I0219 04:37:00.870214    1628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0219 04:37:00.885206    1628 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0219 04:37:00.926305    1628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0219 04:37:00.952925    1628 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0219 04:37:00.952925    1628 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:37:00.961040    1628 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:37:00.993350    1628 docker.go:630] Got preloaded images: 
	I0219 04:37:00.993350    1628 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
	I0219 04:37:01.003872    1628 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 04:37:01.040226    1628 ssh_runner.go:195] Run: which lz4
	I0219 04:37:01.058591    1628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0219 04:37:01.065182    1628 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0219 04:37:01.065182    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416334111 bytes)
	I0219 04:37:00.345219    8340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 04:37:00.345296    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:00.348335    8340 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso...
	I0219 04:37:00.794119    8340 main.go:141] libmachine: Creating SSH key...
	I0219 04:37:01.187364    8340 main.go:141] libmachine: Creating VM...
	I0219 04:37:01.187364    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 04:37:02.923434    8340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 04:37:02.923434    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:02.923434    8340 main.go:141] libmachine: Using switch "Default Switch"
	I0219 04:37:02.923434    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 04:37:03.697060    8340 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 04:37:03.697271    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:03.697271    8340 main.go:141] libmachine: Creating VHD
	I0219 04:37:03.697271    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0219 04:37:03.468097    1628 docker.go:594] Took 2.421331 seconds to copy over tarball
	I0219 04:37:03.480708    1628 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0219 04:37:05.457567    8340 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\
	                          fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 975DD022-8CE0-4848-BAEA-C5005FE04769
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0219 04:37:05.457648    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:05.457648    8340 main.go:141] libmachine: Writing magic tar header
	I0219 04:37:05.457795    8340 main.go:141] libmachine: Writing SSH key tar header
	I0219 04:37:05.465621    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0219 04:37:07.221221    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:07.221309    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:07.221309    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\disk.vhd' -SizeBytes 20000MB
	I0219 04:37:08.586944    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:08.586944    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:08.586944    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM kubernetes-upgrade-803700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0219 04:37:09.050270    1628 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.5695807s)
	I0219 04:37:09.050270    1628 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0219 04:37:09.121999    1628 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 04:37:09.140548    1628 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0219 04:37:09.188599    1628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:37:09.367337    1628 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:37:17.064405    8340 main.go:141] libmachine: [stdout =====>] : 
	Name                      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                      ----- ----------- ----------------- ------   ------             -------
	kubernetes-upgrade-803700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0219 04:37:17.064655    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:17.064655    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName kubernetes-upgrade-803700 -DynamicMemoryEnabled $false
	I0219 04:37:19.543677    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:19.543677    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:19.543677    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor kubernetes-upgrade-803700 -Count 2
	I0219 04:37:20.880531    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:20.880739    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:20.880739    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName kubernetes-upgrade-803700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\boot2docker.iso'
	I0219 04:37:22.644910    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:22.644969    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:22.645108    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName kubernetes-upgrade-803700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\disk.vhd'
	I0219 04:37:24.659450    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:24.659450    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:24.659450    8340 main.go:141] libmachine: Starting VM...
	I0219 04:37:24.659450    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kubernetes-upgrade-803700
	I0219 04:37:24.628591    1628 ssh_runner.go:235] Completed: sudo systemctl restart docker: (15.2613054s)
	I0219 04:37:24.628591    1628 start.go:485] detecting cgroup driver to use...
	I0219 04:37:24.629213    1628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:37:24.674895    1628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0219 04:37:24.701527    1628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0219 04:37:24.720798    1628 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0219 04:37:24.731091    1628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0219 04:37:24.771408    1628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:37:24.797201    1628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0219 04:37:24.824448    1628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:37:24.851926    1628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0219 04:37:24.888791    1628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0219 04:37:24.923523    1628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0219 04:37:24.951751    1628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0219 04:37:24.978693    1628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:37:25.162617    1628 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0219 04:37:25.189664    1628 start.go:485] detecting cgroup driver to use...
	I0219 04:37:25.202622    1628 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0219 04:37:25.230484    1628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:37:25.261282    1628 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0219 04:37:25.555559    1628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:37:25.591953    1628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:37:25.625610    1628 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0219 04:37:26.149198    1628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:37:26.170790    1628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:37:26.212155    1628 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0219 04:37:26.404697    1628 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0219 04:37:26.558780    1628 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0219 04:37:26.558780    1628 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0219 04:37:26.599150    1628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:37:26.766950    1628 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:37:30.724642    1628 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.9576501s)
	I0219 04:37:30.735848    1628 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:37:30.922242    1628 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0219 04:37:31.113265    1628 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:37:31.299060    1628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:37:31.501888    1628 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0219 04:37:31.528489    1628 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0219 04:37:31.539479    1628 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0219 04:37:31.548961    1628 start.go:553] Will wait 60s for crictl version
	I0219 04:37:31.562440    1628 ssh_runner.go:195] Run: which crictl
	I0219 04:37:31.579548    1628 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0219 04:37:31.727273    1628 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0219 04:37:31.739311    1628 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:37:31.799700    1628 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:37:31.898372    1628 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0219 04:37:31.898612    1628 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0219 04:37:31.906721    1628 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0219 04:37:31.906721    1628 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0219 04:37:31.906721    1628 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0219 04:37:31.906721    1628 ip.go:207] Found interface: {Index:11 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7f:a7:14 Flags:up|broadcast|multicast|running}
	I0219 04:37:31.910181    1628 ip.go:210] interface addr: fe80::8ff9:73c7:b894:c84f/64
	I0219 04:37:31.910181    1628 ip.go:210] interface addr: 172.28.240.1/20
	I0219 04:37:31.919952    1628 ssh_runner.go:195] Run: grep 172.28.240.1	host.minikube.internal$ /etc/hosts
	I0219 04:37:31.931074    1628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:37:31.953465    1628 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:37:31.963136    1628 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:37:31.999666    1628 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:37:31.999666    1628 docker.go:560] Images already preloaded, skipping extraction
	I0219 04:37:32.009899    1628 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:37:32.056067    1628 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:37:32.056067    1628 cache_images.go:84] Images are preloaded, skipping loading
	I0219 04:37:32.067744    1628 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0219 04:37:32.115770    1628 cni.go:84] Creating CNI manager for ""
	I0219 04:37:32.115770    1628 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:37:32.115770    1628 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0219 04:37:32.115770    1628 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.255.137 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:NoKubernetes-928900 NodeName:NoKubernetes-928900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.255.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.255.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0219 04:37:32.116511    1628 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.255.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "NoKubernetes-928900"
	  kubeletExtraArgs:
	    node-ip: 172.28.255.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.255.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0219 04:37:32.116627    1628 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=NoKubernetes-928900 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.255.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:NoKubernetes-928900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0219 04:37:32.127400    1628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0219 04:37:32.145360    1628 binaries.go:44] Found k8s binaries, skipping transfer
	I0219 04:37:32.158544    1628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0219 04:37:32.174355    1628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (453 bytes)
	I0219 04:37:32.209372    1628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0219 04:37:32.248362    1628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0219 04:37:32.289839    1628 ssh_runner.go:195] Run: grep 172.28.255.137	control-plane.minikube.internal$ /etc/hosts
	I0219 04:37:32.297539    1628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.255.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:37:32.318400    1628 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900 for IP: 172.28.255.137
	I0219 04:37:32.318400    1628 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:32.319163    1628 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0219 04:37:32.319641    1628 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0219 04:37:32.320513    1628 certs.go:315] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\client.key
	I0219 04:37:32.320621    1628 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\client.crt with IP's: []
	I0219 04:37:32.464269    1628 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\client.crt ...
	I0219 04:37:32.464269    1628 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\client.crt: {Name:mk6cc113d2a062338f6e681513431cb781d6a7cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:32.465264    1628 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\client.key ...
	I0219 04:37:32.465264    1628 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\client.key: {Name:mkef6e93b430868dc5093548d33ca3e4d0a289fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:32.466261    1628 certs.go:315] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.key.a0cadba9
	I0219 04:37:32.466261    1628 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.crt.a0cadba9 with IP's: [172.28.255.137 10.96.0.1 127.0.0.1 10.0.0.1]
	I0219 04:37:32.629628    1628 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.crt.a0cadba9 ...
	I0219 04:37:32.629628    1628 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.crt.a0cadba9: {Name:mkf98e0a330613f0be8480d7b68a27359b8057b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:32.630594    1628 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.key.a0cadba9 ...
	I0219 04:37:32.630594    1628 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.key.a0cadba9: {Name:mkd0c6c99b6db8f496a233c0df63fb6a8948c44c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:32.631606    1628 certs.go:333] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.crt.a0cadba9 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.crt
	I0219 04:37:32.639615    1628 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.key.a0cadba9 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.key
	I0219 04:37:32.640585    1628 certs.go:315] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\proxy-client.key
	I0219 04:37:32.640585    1628 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\proxy-client.crt with IP's: []
	I0219 04:37:30.592066    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:30.592066    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:30.592138    8340 main.go:141] libmachine: Waiting for host to start...
	I0219 04:37:30.592138    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:31.382108    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:31.382108    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:31.382108    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:32.531399    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:32.531764    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:33.537663    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:34.342583    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:34.342633    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:34.344808    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:32.934797    1628 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\proxy-client.crt ...
	I0219 04:37:32.934797    1628 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\proxy-client.crt: {Name:mkb4c19d7e11497f37a890f0a667d7636568d7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:32.935748    1628 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\proxy-client.key ...
	I0219 04:37:32.935748    1628 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\proxy-client.key: {Name:mk4e87d34b0ecc25384ba56b30764edd79efb68b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:32.945823    1628 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem (1338 bytes)
	W0219 04:37:32.945823    1628 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148_empty.pem, impossibly tiny 0 bytes
	I0219 04:37:32.945823    1628 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0219 04:37:32.946751    1628 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0219 04:37:32.946751    1628 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0219 04:37:32.946751    1628 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0219 04:37:32.946751    1628 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem (1708 bytes)
	I0219 04:37:32.948771    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0219 04:37:32.994067    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0219 04:37:33.035284    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0219 04:37:33.076545    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-928900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0219 04:37:33.117811    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0219 04:37:33.156456    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0219 04:37:33.203567    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0219 04:37:33.243851    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0219 04:37:33.281275    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem --> /usr/share/ca-certificates/10148.pem (1338 bytes)
	I0219 04:37:33.320512    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /usr/share/ca-certificates/101482.pem (1708 bytes)
	I0219 04:37:33.361082    1628 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0219 04:37:33.405874    1628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0219 04:37:33.448887    1628 ssh_runner.go:195] Run: openssl version
	I0219 04:37:33.468712    1628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101482.pem && ln -fs /usr/share/ca-certificates/101482.pem /etc/ssl/certs/101482.pem"
	I0219 04:37:33.500769    1628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101482.pem
	I0219 04:37:33.508218    1628 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:37:33.518401    1628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101482.pem
	I0219 04:37:33.535848    1628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101482.pem /etc/ssl/certs/3ec20f2e.0"
	I0219 04:37:33.574360    1628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0219 04:37:33.606715    1628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:37:33.613571    1628 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:37:33.625225    1628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:37:33.647837    1628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0219 04:37:33.680779    1628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10148.pem && ln -fs /usr/share/ca-certificates/10148.pem /etc/ssl/certs/10148.pem"
	I0219 04:37:33.709909    1628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10148.pem
	I0219 04:37:33.716302    1628 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:37:33.725462    1628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10148.pem
	I0219 04:37:33.744119    1628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10148.pem /etc/ssl/certs/51391683.0"
	I0219 04:37:33.765068    1628 kubeadm.go:401] StartCluster: {Name:NoKubernetes-928900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:NoKubernetes-928900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.255.137 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:37:33.773417    1628 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0219 04:37:33.818579    1628 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0219 04:37:33.846547    1628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0219 04:37:33.872896    1628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0219 04:37:33.888442    1628 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0219 04:37:33.888545    1628 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0219 04:37:33.974794    1628 kubeadm.go:322] W0219 04:37:33.960482    1497 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:37:34.213097    1628 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0219 04:37:35.455602    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:35.455602    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:36.469508    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:37.248338    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:37.248338    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:37.248338    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:38.387577    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:38.387630    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:39.389005    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:40.215420    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:40.215420    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:40.215420    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:41.338044    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:41.338150    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:42.339649    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:43.144904    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:43.144904    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:43.145155    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:44.256294    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:44.256448    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:45.260435    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:46.067751    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:46.067869    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:46.067919    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:47.178901    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:47.178901    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:48.191176    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:48.994563    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:48.994563    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:48.994820    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:50.099723    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:50.099779    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:51.102150    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:51.910343    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:51.910645    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:51.910722    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:53.078829    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:53.078906    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:54.079708    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:55.547371    1628 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0219 04:37:55.547371    1628 kubeadm.go:322] [preflight] Running pre-flight checks
	I0219 04:37:55.547926    1628 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0219 04:37:55.548081    1628 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0219 04:37:55.548392    1628 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0219 04:37:55.548569    1628 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0219 04:37:55.555823    1628 out.go:204]   - Generating certificates and keys ...
	I0219 04:37:55.556044    1628 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0219 04:37:55.556044    1628 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0219 04:37:55.556602    1628 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0219 04:37:55.556742    1628 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0219 04:37:55.556796    1628 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0219 04:37:55.556796    1628 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0219 04:37:55.556796    1628 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0219 04:37:55.557667    1628 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost nokubernetes-928900] and IPs [172.28.255.137 127.0.0.1 ::1]
	I0219 04:37:55.557667    1628 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0219 04:37:55.558319    1628 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost nokubernetes-928900] and IPs [172.28.255.137 127.0.0.1 ::1]
	I0219 04:37:55.558477    1628 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0219 04:37:55.558477    1628 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0219 04:37:55.559038    1628 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0219 04:37:55.559253    1628 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0219 04:37:55.559499    1628 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0219 04:37:55.559632    1628 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0219 04:37:55.559632    1628 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0219 04:37:55.559632    1628 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0219 04:37:55.560316    1628 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0219 04:37:55.560364    1628 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0219 04:37:55.560364    1628 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0219 04:37:55.560364    1628 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0219 04:37:55.563554    1628 out.go:204]   - Booting up control plane ...
	I0219 04:37:55.563554    1628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0219 04:37:55.563554    1628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0219 04:37:55.564376    1628 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0219 04:37:55.564818    1628 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0219 04:37:55.565186    1628 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0219 04:37:55.565813    1628 kubeadm.go:322] [apiclient] All control plane components are healthy after 15.506139 seconds
	I0219 04:37:55.566093    1628 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0219 04:37:55.566449    1628 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0219 04:37:55.566717    1628 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0219 04:37:55.566824    1628 kubeadm.go:322] [mark-control-plane] Marking the node nokubernetes-928900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0219 04:37:55.566824    1628 kubeadm.go:322] [bootstrap-token] Using token: jt4lcw.y2grqfpwjrsmf3rf
	I0219 04:37:55.570819    1628 out.go:204]   - Configuring RBAC rules ...
	I0219 04:37:55.570819    1628 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0219 04:37:55.570819    1628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0219 04:37:55.570819    1628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0219 04:37:55.570819    1628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0219 04:37:55.572037    1628 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0219 04:37:55.572074    1628 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0219 04:37:55.572074    1628 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0219 04:37:55.572074    1628 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0219 04:37:55.572074    1628 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0219 04:37:55.572074    1628 kubeadm.go:322] 
	I0219 04:37:55.572074    1628 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0219 04:37:55.572074    1628 kubeadm.go:322] 
	I0219 04:37:55.573083    1628 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0219 04:37:55.573083    1628 kubeadm.go:322] 
	I0219 04:37:55.573083    1628 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0219 04:37:55.573083    1628 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0219 04:37:55.573083    1628 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0219 04:37:55.573083    1628 kubeadm.go:322] 
	I0219 04:37:55.573083    1628 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0219 04:37:55.573083    1628 kubeadm.go:322] 
	I0219 04:37:55.573083    1628 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0219 04:37:55.573083    1628 kubeadm.go:322] 
	I0219 04:37:55.573083    1628 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0219 04:37:55.574057    1628 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0219 04:37:55.574057    1628 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0219 04:37:55.574057    1628 kubeadm.go:322] 
	I0219 04:37:55.574057    1628 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0219 04:37:55.574057    1628 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0219 04:37:55.574057    1628 kubeadm.go:322] 
	I0219 04:37:55.574057    1628 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jt4lcw.y2grqfpwjrsmf3rf \
	I0219 04:37:55.575063    1628 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 \
	I0219 04:37:55.575063    1628 kubeadm.go:322] 	--control-plane 
	I0219 04:37:55.575063    1628 kubeadm.go:322] 
	I0219 04:37:55.575063    1628 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0219 04:37:55.575063    1628 kubeadm.go:322] 
	I0219 04:37:55.575063    1628 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jt4lcw.y2grqfpwjrsmf3rf \
	I0219 04:37:55.575063    1628 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:336da9b9abccf29168ce831cea4353693e9f892a0f3fd4c13af6595c2b5efef1 
	I0219 04:37:55.576073    1628 cni.go:84] Creating CNI manager for ""
	I0219 04:37:55.576073    1628 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:37:55.581257    1628 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0219 04:37:55.592838    1628 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0219 04:37:55.610753    1628 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0219 04:37:55.680443    1628 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0219 04:37:55.693121    1628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:37:55.693121    1628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=b522747fea7d12101d906a75c46b71d9d9f96e61 minikube.k8s.io/name=NoKubernetes-928900 minikube.k8s.io/updated_at=2023_02_19T04_37_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0219 04:37:55.781166    1628 ops.go:34] apiserver oom_adj: -16
	I0219 04:37:56.172163    1628 kubeadm.go:1073] duration metric: took 491.668ms to wait for elevateKubeSystemPrivileges.
	I0219 04:37:56.281141    1628 kubeadm.go:403] StartCluster complete in 22.5161496s
	I0219 04:37:56.281195    1628 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:56.281385    1628 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:37:56.282727    1628 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:37:56.284060    1628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0219 04:37:56.284060    1628 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0219 04:37:56.284060    1628 addons.go:65] Setting storage-provisioner=true in profile "NoKubernetes-928900"
	I0219 04:37:56.284060    1628 addons.go:227] Setting addon storage-provisioner=true in "NoKubernetes-928900"
	I0219 04:37:56.284619    1628 addons.go:65] Setting default-storageclass=true in profile "NoKubernetes-928900"
	I0219 04:37:56.284692    1628 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "NoKubernetes-928900"
	I0219 04:37:56.284692    1628 host.go:66] Checking if "NoKubernetes-928900" exists ...
	I0219 04:37:56.284769    1628 config.go:182] Loaded profile config "NoKubernetes-928900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:37:56.285492    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:37:56.286268    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:37:56.525686    1628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0219 04:37:56.878661    1628 kapi.go:248] "coredns" deployment in "kube-system" namespace and "NoKubernetes-928900" context rescaled to 1 replicas
	I0219 04:37:56.878711    1628 start.go:223] Will wait 6m0s for node &{Name: IP:172.28.255.137 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:37:56.882974    1628 out.go:177] * Verifying Kubernetes components...
	I0219 04:37:56.900866    1628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:37:57.118198    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:57.118198    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:57.127493    1628 addons.go:227] Setting addon default-storageclass=true in "NoKubernetes-928900"
	I0219 04:37:57.127493    1628 host.go:66] Checking if "NoKubernetes-928900" exists ...
	I0219 04:37:57.131747    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:37:57.140786    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:57.140786    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:57.143847    1628 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0219 04:37:57.146838    1628 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0219 04:37:57.146838    1628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0219 04:37:57.146838    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:37:54.905229    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:54.905288    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:54.905288    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:56.071828    8340 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:37:56.072099    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:57.086375    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:58.034558    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:58.034558    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:58.034717    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:59.475309    8340 main.go:141] libmachine: [stdout =====>] : 172.28.248.155
	
	I0219 04:37:59.475309    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:59.475309    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:37:58.159525    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:58.159525    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:58.159525    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:58.159525    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:58.159525    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:58.159525    1628 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0219 04:37:58.159525    1628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0219 04:37:58.159525    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-928900 ).state
	I0219 04:37:58.316353    1628 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.240.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.790673s)
	I0219 04:37:58.316353    1628 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.4154917s)
	I0219 04:37:58.316353    1628 start.go:921] {"host.minikube.internal": 172.28.240.1} host record injected into CoreDNS's ConfigMap
	I0219 04:37:58.319366    1628 api_server.go:51] waiting for apiserver process to appear ...
	I0219 04:37:58.334342    1628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:37:58.371426    1628 api_server.go:71] duration metric: took 1.4926014s to wait for apiserver process to appear ...
	I0219 04:37:58.371426    1628 api_server.go:87] waiting for apiserver healthz status ...
	I0219 04:37:58.371426    1628 api_server.go:252] Checking apiserver healthz at https://172.28.255.137:8443/healthz ...
	I0219 04:37:58.385424    1628 api_server.go:278] https://172.28.255.137:8443/healthz returned 200:
	ok
	I0219 04:37:58.387811    1628 api_server.go:140] control plane version: v1.26.1
	I0219 04:37:58.387811    1628 api_server.go:130] duration metric: took 16.3846ms to wait for apiserver health ...
	I0219 04:37:58.387811    1628 system_pods.go:43] waiting for kube-system pods to appear ...
	I0219 04:37:58.398646    1628 system_pods.go:59] 4 kube-system pods found
	I0219 04:37:58.398646    1628 system_pods.go:61] "etcd-nokubernetes-928900" [da445a25-54e8-49f9-a05c-fbec08f3c301] Running
	I0219 04:37:58.398646    1628 system_pods.go:61] "kube-apiserver-nokubernetes-928900" [ac2e3c35-608b-4b25-bf70-ea5a9ecad8ec] Pending
	I0219 04:37:58.398646    1628 system_pods.go:61] "kube-controller-manager-nokubernetes-928900" [c57893c8-3803-4d5f-8687-7b92d776c31a] Pending
	I0219 04:37:58.398646    1628 system_pods.go:61] "kube-scheduler-nokubernetes-928900" [9920b009-84bd-4d92-9cd5-b113c2d48396] Pending
	I0219 04:37:58.398646    1628 system_pods.go:74] duration metric: took 10.8353ms to wait for pod list to return data ...
	I0219 04:37:58.398646    1628 kubeadm.go:578] duration metric: took 1.5198213s to wait for : map[apiserver:true system_pods:true] ...
	I0219 04:37:58.398646    1628 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:37:58.404048    1628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:37:58.404048    1628 node_conditions.go:123] node cpu capacity is 2
	I0219 04:37:58.404136    1628 node_conditions.go:105] duration metric: took 5.4899ms to run NodePressure ...
	I0219 04:37:58.404136    1628 start.go:228] waiting for startup goroutines ...
	I0219 04:37:59.111470    1628 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:37:59.111686    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:59.111686    1628 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM NoKubernetes-928900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:37:59.599936    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:37:59.599936    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:37:59.599936    1628 sshutil.go:53] new ssh client: &{IP:172.28.255.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\id_rsa Username:docker}
	I0219 04:37:59.810595    1628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0219 04:38:00.609278    1628 main.go:141] libmachine: [stdout =====>] : 172.28.255.137
	
	I0219 04:38:00.609278    1628 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:00.609278    1628 sshutil.go:53] new ssh client: &{IP:172.28.255.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\NoKubernetes-928900\id_rsa Username:docker}
	I0219 04:38:00.766251    1628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0219 04:38:01.247500    1628 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0219 04:38:01.250831    1628 addons.go:492] enable addons completed in 4.9667875s: enabled=[storage-provisioner default-storageclass]
	I0219 04:38:01.250831    1628 start.go:233] waiting for cluster config update ...
	I0219 04:38:01.250831    1628 start.go:242] writing updated cluster config ...
	I0219 04:38:01.274829    1628 ssh_runner.go:195] Run: rm -f paused
	I0219 04:38:01.523334    1628 start.go:555] kubectl: 1.18.2, cluster: 1.26.1 (minor skew: 8)
	I0219 04:38:01.525557    1628 out.go:177] 
	W0219 04:38:01.529133    1628 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.1.
	I0219 04:38:01.534844    1628 out.go:177]   - Want kubectl v1.26.1? Try 'minikube kubectl -- get pods -A'
	I0219 04:38:01.538275    1628 out.go:177] * Done! kubectl is now configured to use "NoKubernetes-928900" cluster and "default" namespace by default
	I0219 04:38:00.530599    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:00.530656    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:00.530716    8340 machine.go:88] provisioning docker machine ...
	I0219 04:38:00.530772    8340 buildroot.go:166] provisioning hostname "kubernetes-upgrade-803700"
	I0219 04:38:00.530994    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:38:01.476338    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:01.476338    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:01.476338    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:38:02.720759    8340 main.go:141] libmachine: [stdout =====>] : 172.28.248.155
	
	I0219 04:38:02.720759    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:02.726537    8340 main.go:141] libmachine: Using SSH client type: native
	I0219 04:38:02.727504    8340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.155 22 <nil> <nil>}
	I0219 04:38:02.727698    8340 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-803700 && echo "kubernetes-upgrade-803700" | sudo tee /etc/hostname
	I0219 04:38:02.918570    8340 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-803700
	
	I0219 04:38:02.918725    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:38:03.860483    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:03.860558    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:03.860636    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:38:05.203366    8340 main.go:141] libmachine: [stdout =====>] : 172.28.248.155
	
	I0219 04:38:05.203617    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:05.208662    8340 main.go:141] libmachine: Using SSH client type: native
	I0219 04:38:05.209394    8340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.155 22 <nil> <nil>}
	I0219 04:38:05.209394    8340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-803700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-803700/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-803700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:38:05.382435    8340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0219 04:38:05.382435    8340 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:38:05.382435    8340 buildroot.go:174] setting up certificates
	I0219 04:38:05.382435    8340 provision.go:83] configureAuth start
	I0219 04:38:05.382435    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:38:06.202331    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:06.202381    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:06.202480    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:38:07.370134    8340 main.go:141] libmachine: [stdout =====>] : 172.28.248.155
	
	I0219 04:38:07.370320    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:07.370387    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:38:08.252434    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:08.252434    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:08.252434    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:38:09.501367    8340 main.go:141] libmachine: [stdout =====>] : 172.28.248.155
	
	I0219 04:38:09.501367    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:09.501367    8340 provision.go:138] copyHostCerts
	I0219 04:38:09.501367    8340 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:38:09.501367    8340 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:38:09.502420    8340 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:38:09.504287    8340 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:38:09.504287    8340 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:38:09.504842    8340 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:38:09.506287    8340 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:38:09.506385    8340 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:38:09.506492    8340 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:38:09.508691    8340 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-803700 san=[172.28.248.155 172.28.248.155 localhost 127.0.0.1 minikube kubernetes-upgrade-803700]
	I0219 04:38:10.329534    8340 provision.go:172] copyRemoteCerts
	I0219 04:38:10.340600    8340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:38:10.340600    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:38:11.164501    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:11.164501    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:11.164501    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:38:12.401280    8340 main.go:141] libmachine: [stdout =====>] : 172.28.248.155
	
	I0219 04:38:12.401280    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:12.401280    8340 sshutil.go:53] new ssh client: &{IP:172.28.248.155 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\id_rsa Username:docker}
	I0219 04:38:12.511522    8340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.1707796s)
	I0219 04:38:12.511522    8340 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:38:12.556529    8340 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0219 04:38:12.598772    8340 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0219 04:38:12.651717    8340 provision.go:86] duration metric: configureAuth took 7.2693065s
	I0219 04:38:12.651717    8340 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:38:12.652871    8340 config.go:182] Loaded profile config "kubernetes-upgrade-803700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0219 04:38:12.653041    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:38:13.575151    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:13.575351    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:13.575433    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:38:14.792499    8340 main.go:141] libmachine: [stdout =====>] : 172.28.248.155
	
	I0219 04:38:14.792499    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:14.796844    8340 main.go:141] libmachine: Using SSH client type: native
	I0219 04:38:14.797867    8340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.155 22 <nil> <nil>}
	I0219 04:38:14.797919    8340 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:38:14.959935    8340 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:38:14.960503    8340 buildroot.go:70] root file system type: tmpfs
	I0219 04:38:14.960714    8340 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:38:14.960803    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:38:15.806374    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:15.806374    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:15.806760    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:38:16.971327    8340 main.go:141] libmachine: [stdout =====>] : 172.28.248.155
	
	I0219 04:38:16.971378    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:16.975710    8340 main.go:141] libmachine: Using SSH client type: native
	I0219 04:38:16.976326    8340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.155 22 <nil> <nil>}
	I0219 04:38:16.976326    8340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 04:38:17.164950    8340 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0219 04:38:17.164950    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:38:18.023635    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:18.023635    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:18.023635    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:38:19.399491    8340 main.go:141] libmachine: [stdout =====>] : 172.28.248.155
	
	I0219 04:38:19.399491    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:19.405493    8340 main.go:141] libmachine: Using SSH client type: native
	I0219 04:38:19.406495    8340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.155 22 <nil> <nil>}
	I0219 04:38:19.406495    8340 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 04:38:21.189284    8340 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0219 04:38:21.189284    8340 machine.go:91] provisioned docker machine in 20.6586383s
	I0219 04:38:21.189284    8340 client.go:171] LocalClient.Create took 1m24.5376865s
	I0219 04:38:21.189284    8340 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-803700" took 1m24.5376865s
	I0219 04:38:21.189284    8340 start.go:300] post-start starting for "kubernetes-upgrade-803700" (driver="hyperv")
	I0219 04:38:21.189284    8340 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 04:38:21.202696    8340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 04:38:21.202696    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:38:22.011549    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:22.011549    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:22.011802    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:38:23.249449    8340 main.go:141] libmachine: [stdout =====>] : 172.28.248.155
	
	I0219 04:38:23.249449    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:23.250146    8340 sshutil.go:53] new ssh client: &{IP:172.28.248.155 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\id_rsa Username:docker}
	I0219 04:38:23.382557    8340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (2.1798684s)
	I0219 04:38:23.394639    8340 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 04:38:23.401403    8340 info.go:137] Remote host: Buildroot 2021.02.12
	I0219 04:38:23.401403    8340 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 04:38:23.401937    8340 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 04:38:23.402931    8340 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 04:38:23.415649    8340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 04:38:23.431645    8340 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 04:38:23.482858    8340 start.go:303] post-start completed in 2.2935818s
	I0219 04:38:23.486490    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:38:24.340508    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:24.340634    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:24.340710    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:38:25.514402    8340 main.go:141] libmachine: [stdout =====>] : 172.28.248.155
	
	I0219 04:38:25.514467    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:25.514792    8340 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubernetes-upgrade-803700\config.json ...
	I0219 04:38:25.517545    8340 start.go:128] duration metric: createHost completed in 1m28.8694607s
	I0219 04:38:25.517545    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:38:26.355145    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:26.355209    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:26.355307    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:38:27.508708    8340 main.go:141] libmachine: [stdout =====>] : 172.28.248.155
	
	I0219 04:38:27.508708    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:27.511713    8340 main.go:141] libmachine: Using SSH client type: native
	I0219 04:38:27.512729    8340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.155 22 <nil> <nil>}
	I0219 04:38:27.512729    8340 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0219 04:38:27.673615    8340 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676781507.673061000
	
	I0219 04:38:27.674171    8340 fix.go:207] guest clock: 1676781507.673061000
	I0219 04:38:27.674232    8340 fix.go:220] Guest: 2023-02-19 04:38:27.673061 +0000 GMT Remote: 2023-02-19 04:38:25.5175454 +0000 GMT m=+301.016899501 (delta=2.1555156s)
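As a sanity check on the skew that `fix.go` reports above, the delta can be recomputed from the two timestamps in the log. A minimal Python sketch (the epoch value for the Remote timestamp is derived from the logged `2023-02-19 04:38:25.5175454 +0000 GMT`):

```python
# Recompute the guest-vs-host clock delta reported by fix.go above.
# Guest clock: seconds since epoch, as returned by `date +%s.%N` in the VM.
guest = 1676781507.673061
# Remote (host) clock: 2023-02-19 04:38:25.5175454 +0000 GMT as an epoch value.
remote = 1676781505.5175454

delta = guest - remote
print(f"delta={delta:.7f}s")  # ~2.1555156s, as logged
```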
	I0219 04:38:27.674327    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:38:28.490659    8340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:38:28.490659    8340 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:38:28.490659    8340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sun 2023-02-19 04:36:23 UTC, ends at Sun 2023-02-19 04:38:34 UTC. --
	Feb 19 04:37:45 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:37:45.684465536Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/db500d906c580115a4986a3716cad00f9f57413dac36dd37a9a4c643e6939da9 pid=2067 runtime=io.containerd.runc.v2
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.771576308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.771651009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.771664809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.771919911Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6d75146e32ea3c26a5262ce325a5ad4fb10a96331a9e80bc79ed561325344020 pid=2971 runtime=io.containerd.runc.v2
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.789927823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.790089924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.790118424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.790700428Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2593c2cd0913c8204efb90c659a5c84b35dc6067512c0c6a18ee2dd6433747c9 pid=2981 runtime=io.containerd.runc.v2
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.818854504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.819462208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.819609209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:38:11 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:11.821728322Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f22645641fe3914cc9f61da31c443213dba270e6e5e9989f4b0cb6f52b96a23e pid=3006 runtime=io.containerd.runc.v2
	Feb 19 04:38:12 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:12.394777943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:38:12 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:12.394921543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:38:12 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:12.394957344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:38:12 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:12.395193845Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1276cb3c8928d61f0079ea963da4db8c5c878610d1bfcfeab39768397f7310ef pid=3101 runtime=io.containerd.runc.v2
	Feb 19 04:38:13 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:13.160394888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:38:13 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:13.160572389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:38:13 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:13.160592389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:38:13 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:13.160981091Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6a8f65c7149c935d2c68cc1e114ccc237aa001c21c97eb6a986bd175cd926cbd pid=3252 runtime=io.containerd.runc.v2
	Feb 19 04:38:13 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:13.449174306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:38:13 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:13.449321507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:38:13 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:13.449344807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:38:13 NoKubernetes-928900 dockerd[1157]: time="2023-02-19T04:38:13.450335313Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8405542e120b22922cbba6af67964f8ed8b360675547a1827789be92c5cbcf9a pid=3328 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	8405542e120b2       6e38f40d628db       21 seconds ago      Running             storage-provisioner       0                   2593c2cd0913c
	6a8f65c7149c9       5185b96f0becf       22 seconds ago      Running             coredns                   0                   6d75146e32ea3
	1276cb3c8928d       46a6bb3c77ce0       22 seconds ago      Running             kube-proxy                0                   f22645641fe39
	db500d906c580       655493523f607       49 seconds ago      Running             kube-scheduler            0                   273c0385ca3db
	7333f9f1213d0       fce326961ae2d       49 seconds ago      Running             etcd                      0                   839cd21b300ae
	ccab12d1308d0       deb04688c4a35       49 seconds ago      Running             kube-apiserver            0                   89f488b5352d4
	1ec61dba91d1e       e9c08e11b07f6       50 seconds ago      Running             kube-controller-manager   0                   e48d5d98d9926
	
	* 
	* ==> coredns [6a8f65c7149c] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = dc373b1a880fdd4ccb700cff30600cc4bf8c50378309c853254a8500867351a3e9142cc9578843a443961b28e6690d646b490f89e043822a41fbe79aabc9a951
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:33633 - 34572 "HINFO IN 7078759061569732506.7153018791202959406. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034606205s
	
	* 
	* ==> describe nodes <==
	* Name:               nokubernetes-928900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=nokubernetes-928900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b522747fea7d12101d906a75c46b71d9d9f96e61
	                    minikube.k8s.io/name=NoKubernetes-928900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_19T04_37_55_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Feb 2023 04:37:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  nokubernetes-928900
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Feb 2023 04:38:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Feb 2023 04:38:16 +0000   Sun, 19 Feb 2023 04:37:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Feb 2023 04:38:16 +0000   Sun, 19 Feb 2023 04:37:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Feb 2023 04:38:16 +0000   Sun, 19 Feb 2023 04:37:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Feb 2023 04:38:16 +0000   Sun, 19 Feb 2023 04:37:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.255.137
	  Hostname:    nokubernetes-928900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             5925712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             5925712Ki
	  pods:               110
	System Info:
	  Machine ID:                 0cc83cb192f045599c8425830a963be6
	  System UUID:                250dded1-93d2-1140-b45a-92b4cf99cb94
	  Boot ID:                    46c8b0ab-a693-48a0-ab80-719ecf84a1da
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-787d4945fb-jnhkk                       100m (5%)     0 (0%)      70Mi (1%)        170Mi (2%)     27s
	  kube-system                 etcd-nokubernetes-928900                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         39s
	  kube-system                 kube-apiserver-nokubernetes-928900             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-nokubernetes-928900    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-lhrch                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-nokubernetes-928900             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
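The request percentages in the node tables above follow from the node's Allocatable figures (cpu: 2, memory: 5925712Ki). An illustrative Python sketch of the arithmetic (integer truncation, as `kubectl describe` displays it; variable names are made up here):

```python
# Node allocatable capacity, from the "Allocatable" block above.
alloc_cpu_millicores = 2 * 1000   # cpu: 2 cores
alloc_memory_kib = 5925712        # memory: 5925712Ki

# Requested totals, from "Allocated resources".
cpu_request_millicores = 750      # 750m
memory_request_kib = 170 * 1024   # 170Mi

# Percentages as shown in the table (truncated toward zero).
cpu_pct = cpu_request_millicores * 100 // alloc_cpu_millicores  # 37
mem_pct = memory_request_kib * 100 // alloc_memory_kib          # 2
print(cpu_pct, mem_pct)  # matches the 37% and 2% shown above
```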
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  NodeHasSufficientMemory  56s (x6 over 56s)  kubelet          Node nokubernetes-928900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x6 over 56s)  kubelet          Node nokubernetes-928900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x5 over 56s)  kubelet          Node nokubernetes-928900 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node nokubernetes-928900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node nokubernetes-928900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node nokubernetes-928900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                39s                kubelet          Node nokubernetes-928900 status is now: NodeReady
	  Normal  RegisteredNode           27s                node-controller  Node nokubernetes-928900 event: Registered Node nokubernetes-928900 in Controller
	
	* 
	* ==> dmesg <==
	* [  +1.411145] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.501864] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.274578] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +8.689277] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +17.465909] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.160541] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[Feb19 04:37] systemd-fstab-generator[918]: Ignoring "noauto" for root device
	[ +13.600114] kauditd_printk_skb: 14 callbacks suppressed
	[  +2.175386] systemd-fstab-generator[1080]: Ignoring "noauto" for root device
	[  +1.252246] systemd-fstab-generator[1118]: Ignoring "noauto" for root device
	[  +0.183233] systemd-fstab-generator[1129]: Ignoring "noauto" for root device
	[  +0.187655] systemd-fstab-generator[1142]: Ignoring "noauto" for root device
	[  +2.075209] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.065085] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[  +0.195140] systemd-fstab-generator[1300]: Ignoring "noauto" for root device
	[  +0.195548] systemd-fstab-generator[1311]: Ignoring "noauto" for root device
	[  +0.181230] systemd-fstab-generator[1322]: Ignoring "noauto" for root device
	[  +6.792402] systemd-fstab-generator[1570]: Ignoring "noauto" for root device
	[  +0.830935] kauditd_printk_skb: 29 callbacks suppressed
	[ +16.166236] systemd-fstab-generator[2542]: Ignoring "noauto" for root device
	[Feb19 04:38] kauditd_printk_skb: 8 callbacks suppressed
	[ +17.816412] hrtimer: interrupt took 2393310 ns
	
	* 
	* ==> etcd [7333f9f1213d] <==
	* {"level":"warn","ts":"2023-02-19T04:38:07.023Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:38:06.706Z","time spent":"317.551041ms","remote":"127.0.0.1:50590","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4104,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-nokubernetes-928900\" mod_revision:303 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-nokubernetes-928900\" value_size:4035 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-nokubernetes-928900\" > >"}
	{"level":"info","ts":"2023-02-19T04:38:15.925Z","caller":"traceutil/trace.go:171","msg":"trace[1786140760] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"243.524782ms","start":"2023-02-19T04:38:15.681Z","end":"2023-02-19T04:38:15.925Z","steps":["trace[1786140760] 'process raft request'  (duration: 243.327881ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-19T04:38:16.926Z","caller":"traceutil/trace.go:171","msg":"trace[1853276706] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"179.894697ms","start":"2023-02-19T04:38:16.746Z","end":"2023-02-19T04:38:16.926Z","steps":["trace[1853276706] 'process raft request'  (duration: 179.716896ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-19T04:38:24.180Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"152.855608ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10356456599194062041 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.28.255.137\" mod_revision:372 > success:<request_put:<key:\"/registry/masterleases/172.28.255.137\" value_size:67 lease:1133084562339286231 >> failure:<request_range:<key:\"/registry/masterleases/172.28.255.137\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-02-19T04:38:24.181Z","caller":"traceutil/trace.go:171","msg":"trace[176435779] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"351.805645ms","start":"2023-02-19T04:38:23.829Z","end":"2023-02-19T04:38:24.181Z","steps":["trace[176435779] 'process raft request'  (duration: 197.885333ms)","trace[176435779] 'compare'  (duration: 152.658207ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:38:24.181Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:38:23.829Z","time spent":"351.985946ms","remote":"127.0.0.1:50566","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.28.255.137\" mod_revision:372 > success:<request_put:<key:\"/registry/masterleases/172.28.255.137\" value_size:67 lease:1133084562339286231 >> failure:<request_range:<key:\"/registry/masterleases/172.28.255.137\" > >"}
	{"level":"info","ts":"2023-02-19T04:38:24.183Z","caller":"traceutil/trace.go:171","msg":"trace[51475071] linearizableReadLoop","detail":"{readStateIndex:408; appliedIndex:407; }","duration":"166.105569ms","start":"2023-02-19T04:38:24.016Z","end":"2023-02-19T04:38:24.183Z","steps":["trace[51475071] 'read index received'  (duration: 10.73925ms)","trace[51475071] 'applied index is now lower than readState.Index'  (duration: 155.364619ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:38:24.183Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"166.45337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:611"}
	{"level":"info","ts":"2023-02-19T04:38:24.183Z","caller":"traceutil/trace.go:171","msg":"trace[1781531712] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:391; }","duration":"166.690771ms","start":"2023-02-19T04:38:24.016Z","end":"2023-02-19T04:38:24.183Z","steps":["trace[1781531712] 'agreement among raft nodes before linearized reading'  (duration: 166.281369ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-19T04:38:24.349Z","caller":"traceutil/trace.go:171","msg":"trace[710553711] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"157.955831ms","start":"2023-02-19T04:38:24.191Z","end":"2023-02-19T04:38:24.349Z","steps":["trace[710553711] 'process raft request'  (duration: 139.629646ms)","trace[710553711] 'compare'  (duration: 18.187985ms)"],"step_count":2}
	{"level":"info","ts":"2023-02-19T04:38:27.669Z","caller":"traceutil/trace.go:171","msg":"trace[102715577] linearizableReadLoop","detail":"{readStateIndex:411; appliedIndex:410; }","duration":"244.665162ms","start":"2023-02-19T04:38:27.424Z","end":"2023-02-19T04:38:27.668Z","steps":["trace[102715577] 'read index received'  (duration: 243.345456ms)","trace[102715577] 'applied index is now lower than readState.Index'  (duration: 1.318206ms)"],"step_count":2}
	{"level":"info","ts":"2023-02-19T04:38:27.669Z","caller":"traceutil/trace.go:171","msg":"trace[1320072997] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"327.305621ms","start":"2023-02-19T04:38:27.342Z","end":"2023-02-19T04:38:27.669Z","steps":["trace[1320072997] 'process raft request'  (duration: 326.416417ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-19T04:38:27.670Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:38:27.342Z","time spent":"327.962123ms","remote":"127.0.0.1:50620","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":565,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/nokubernetes-928900\" mod_revision:386 > success:<request_put:<key:\"/registry/leases/kube-node-lease/nokubernetes-928900\" value_size:505 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/nokubernetes-928900\" > >"}
	{"level":"warn","ts":"2023-02-19T04:38:27.669Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"245.534965ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-02-19T04:38:27.670Z","caller":"traceutil/trace.go:171","msg":"trace[462139626] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:394; }","duration":"246.114968ms","start":"2023-02-19T04:38:27.424Z","end":"2023-02-19T04:38:27.670Z","steps":["trace[462139626] 'agreement among raft nodes before linearized reading'  (duration: 245.507465ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-19T04:38:32.716Z","caller":"traceutil/trace.go:171","msg":"trace[111809434] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"118.963966ms","start":"2023-02-19T04:38:32.597Z","end":"2023-02-19T04:38:32.716Z","steps":["trace[111809434] 'process raft request'  (duration: 118.400864ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-19T04:38:32.796Z","caller":"traceutil/trace.go:171","msg":"trace[1138209748] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"128.444603ms","start":"2023-02-19T04:38:32.667Z","end":"2023-02-19T04:38:32.796Z","steps":["trace[1138209748] 'process raft request'  (duration: 128.317602ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-19T04:38:34.070Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"190.863728ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10356456599194062077 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.28.255.137\" mod_revision:391 > success:<request_put:<key:\"/registry/masterleases/172.28.255.137\" value_size:67 lease:1133084562339286267 >> failure:<request_range:<key:\"/registry/masterleases/172.28.255.137\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-02-19T04:38:34.070Z","caller":"traceutil/trace.go:171","msg":"trace[587048524] linearizableReadLoop","detail":"{readStateIndex:418; appliedIndex:417; }","duration":"143.647247ms","start":"2023-02-19T04:38:33.926Z","end":"2023-02-19T04:38:34.070Z","steps":["trace[587048524] 'read index received'  (duration: 233.601µs)","trace[587048524] 'applied index is now lower than readState.Index'  (duration: 143.412546ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:38:34.070Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"143.829148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-02-19T04:38:34.070Z","caller":"traceutil/trace.go:171","msg":"trace[1166238602] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:399; }","duration":"143.852848ms","start":"2023-02-19T04:38:33.926Z","end":"2023-02-19T04:38:34.070Z","steps":["trace[1166238602] 'agreement among raft nodes before linearized reading'  (duration: 143.714647ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-19T04:38:34.071Z","caller":"traceutil/trace.go:171","msg":"trace[1670111092] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"341.193505ms","start":"2023-02-19T04:38:33.729Z","end":"2023-02-19T04:38:34.071Z","steps":["trace[1670111092] 'process raft request'  (duration: 149.538174ms)","trace[1670111092] 'compare'  (duration: 190.717128ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:38:34.071Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:38:33.729Z","time spent":"341.309806ms","remote":"127.0.0.1:50566","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.28.255.137\" mod_revision:391 > success:<request_put:<key:\"/registry/masterleases/172.28.255.137\" value_size:67 lease:1133084562339286267 >> failure:<request_range:<key:\"/registry/masterleases/172.28.255.137\" > >"}
	{"level":"warn","ts":"2023-02-19T04:38:34.575Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"154.172281ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-02-19T04:38:34.575Z","caller":"traceutil/trace.go:171","msg":"trace[1833982384] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:399; }","duration":"154.319881ms","start":"2023-02-19T04:38:34.421Z","end":"2023-02-19T04:38:34.575Z","steps":["trace[1833982384] 'range keys from in-memory index tree'  (duration: 154.04978ms)"],"step_count":1}
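The etcd traces above break each total duration into named steps; the steps sum to no more than the total, with the remainder being untraced overhead. A quick cross-check on trace[1670111092] (values taken from the log):

```python
# Step durations vs. reported total for etcd trace[1670111092], in milliseconds.
total_ms = 341.193505
step_ms = [
    149.538174,  # 'process raft request'
    190.717128,  # 'compare'
]

traced = sum(step_ms)
overhead = total_ms - traced  # untraced time within the transaction
print(f"traced={traced:.6f}ms overhead={overhead:.6f}ms")
```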
	
	* 
	* ==> kernel <==
	*  04:38:35 up 2 min,  0 users,  load average: 2.53, 0.88, 0.32
	Linux NoKubernetes-928900 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ccab12d1308d] <==
	* I0219 04:37:51.167316       1 controller.go:615] quota admission added evaluator for: namespaces
	I0219 04:37:51.362901       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0219 04:37:51.603166       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0219 04:37:52.016732       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0219 04:37:52.038384       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0219 04:37:52.038545       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0219 04:37:53.407267       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0219 04:37:53.489614       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0219 04:37:53.704828       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0219 04:37:53.721146       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [172.28.255.137]
	I0219 04:37:53.723035       1 controller.go:615] quota admission added evaluator for: endpoints
	I0219 04:37:53.745755       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0219 04:37:54.098836       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0219 04:37:55.426489       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0219 04:37:55.459845       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0219 04:37:55.476900       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0219 04:38:06.683345       1 trace.go:219] Trace[565429588]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:9effddd2-829a-4208-bd01-7c8a72431b6c,client:172.28.255.137,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/nokubernetes-928900,user-agent:kubelet/v1.26.1 (linux/amd64) kubernetes/8f94681,verb:PUT (19-Feb-2023 04:38:06.080) (total time: 602ms):
	Trace[565429588]: ["GuaranteedUpdate etcd3" audit-id:9effddd2-829a-4208-bd01-7c8a72431b6c,key:/leases/kube-node-lease/nokubernetes-928900,type:*coordination.Lease,resource:leases.coordination.k8s.io 602ms (04:38:06.080)
	Trace[565429588]:  ---"Txn call completed" 601ms (04:38:06.682)]
	Trace[565429588]: [602.934063ms] [602.934063ms] END
	I0219 04:38:06.689056       1 trace.go:219] Trace[1682182241]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d850065f-9249-4115-9ff4-727847696731,client:172.28.255.137,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-scheduler-nokubernetes-928900,user-agent:kubelet/v1.26.1 (linux/amd64) kubernetes/8f94681,verb:GET (19-Feb-2023 04:38:06.158) (total time: 530ms):
	Trace[1682182241]: ---"About to write a response" 530ms (04:38:06.688)
	Trace[1682182241]: [530.454151ms] [530.454151ms] END
	I0219 04:38:08.257914       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0219 04:38:08.276040       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [1ec61dba91d1] <==
	* I0219 04:38:08.117091       1 shared_informer.go:273] Waiting for caches to sync for garbage collector
	I0219 04:38:08.117528       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0219 04:38:08.117954       1 taint_manager.go:211] "Sending events to api server"
	I0219 04:38:08.132713       1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone: 
	W0219 04:38:08.133959       1 node_lifecycle_controller.go:1053] Missing timestamp for Node nokubernetes-928900. Assuming now as a timestamp.
	I0219 04:38:08.134350       1 node_lifecycle_controller.go:1254] Controller detected that zone  is now in state Normal.
	I0219 04:38:08.132997       1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
	I0219 04:38:08.136176       1 shared_informer.go:280] Caches are synced for persistent volume
	I0219 04:38:08.138077       1 event.go:294] "Event occurred" object="nokubernetes-928900" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node nokubernetes-928900 event: Registered Node nokubernetes-928900 in Controller"
	I0219 04:38:08.143846       1 shared_informer.go:280] Caches are synced for attach detach
	I0219 04:38:08.144621       1 shared_informer.go:280] Caches are synced for crt configmap
	I0219 04:38:08.147179       1 shared_informer.go:280] Caches are synced for HPA
	I0219 04:38:08.148621       1 shared_informer.go:280] Caches are synced for expand
	I0219 04:38:08.169933       1 range_allocator.go:372] Set node nokubernetes-928900 PodCIDR to [10.244.0.0/24]
	I0219 04:38:08.199059       1 shared_informer.go:280] Caches are synced for deployment
	I0219 04:38:08.203826       1 shared_informer.go:280] Caches are synced for ReplicationController
	I0219 04:38:08.242423       1 shared_informer.go:280] Caches are synced for resource quota
	I0219 04:38:08.250298       1 shared_informer.go:280] Caches are synced for disruption
	I0219 04:38:08.287764       1 shared_informer.go:280] Caches are synced for resource quota
	I0219 04:38:08.300479       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 1"
	I0219 04:38:08.301195       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lhrch"
	I0219 04:38:08.407652       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-jnhkk"
	I0219 04:38:08.618798       1 shared_informer.go:280] Caches are synced for garbage collector
	I0219 04:38:08.622027       1 shared_informer.go:280] Caches are synced for garbage collector
	I0219 04:38:08.622178       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [1276cb3c8928] <==
	* I0219 04:38:12.882857       1 node.go:163] Successfully retrieved node IP: 172.28.255.137
	I0219 04:38:12.883032       1 server_others.go:109] "Detected node IP" address="172.28.255.137"
	I0219 04:38:12.883102       1 server_others.go:535] "Using iptables proxy"
	I0219 04:38:12.995878       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0219 04:38:12.995973       1 server_others.go:176] "Using iptables Proxier"
	I0219 04:38:12.996289       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0219 04:38:12.996677       1 server.go:655] "Version info" version="v1.26.1"
	I0219 04:38:12.996691       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0219 04:38:12.997986       1 config.go:317] "Starting service config controller"
	I0219 04:38:12.998026       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0219 04:38:12.998059       1 config.go:226] "Starting endpoint slice config controller"
	I0219 04:38:12.998064       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0219 04:38:13.005963       1 config.go:444] "Starting node config controller"
	I0219 04:38:13.006139       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0219 04:38:13.099173       1 shared_informer.go:280] Caches are synced for service config
	I0219 04:38:13.099321       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0219 04:38:13.107382       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [db500d906c58] <==
	* W0219 04:37:52.087077       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0219 04:37:52.087108       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0219 04:37:52.169403       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0219 04:37:52.169455       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0219 04:37:52.188545       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0219 04:37:52.190402       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0219 04:37:52.233730       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0219 04:37:52.233841       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0219 04:37:52.250044       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0219 04:37:52.250093       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0219 04:37:52.252066       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0219 04:37:52.252178       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0219 04:37:52.328919       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0219 04:37:52.328984       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0219 04:37:52.337320       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0219 04:37:52.337365       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0219 04:37:52.546029       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0219 04:37:52.546064       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0219 04:37:52.596615       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0219 04:37:52.596680       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0219 04:37:52.640485       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0219 04:37:52.640570       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0219 04:37:52.648370       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0219 04:37:52.648580       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0219 04:37:55.889628       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sun 2023-02-19 04:36:23 UTC, ends at Sun 2023-02-19 04:38:35 UTC. --
	Feb 19 04:38:08 NoKubernetes-928900 kubelet[2567]: W0219 04:38:08.461609    2567 reflector.go:424] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:nokubernetes-928900" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'nokubernetes-928900' and this object
	Feb 19 04:38:08 NoKubernetes-928900 kubelet[2567]: E0219 04:38:08.461835    2567 reflector.go:140] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:nokubernetes-928900" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'nokubernetes-928900' and this object
	Feb 19 04:38:08 NoKubernetes-928900 kubelet[2567]: I0219 04:38:08.495772    2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/014f9e4c-8fef-491f-8fea-aa8c38cdaba4-config-volume\") pod \"coredns-787d4945fb-jnhkk\" (UID: \"014f9e4c-8fef-491f-8fea-aa8c38cdaba4\") " pod="kube-system/coredns-787d4945fb-jnhkk"
	Feb 19 04:38:08 NoKubernetes-928900 kubelet[2567]: I0219 04:38:08.495929    2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btzpb\" (UniqueName: \"kubernetes.io/projected/10d0b397-d7f4-4b0b-b5fb-ed44b29f6dbd-kube-api-access-btzpb\") pod \"storage-provisioner\" (UID: \"10d0b397-d7f4-4b0b-b5fb-ed44b29f6dbd\") " pod="kube-system/storage-provisioner"
	Feb 19 04:38:08 NoKubernetes-928900 kubelet[2567]: I0219 04:38:08.496021    2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/10d0b397-d7f4-4b0b-b5fb-ed44b29f6dbd-tmp\") pod \"storage-provisioner\" (UID: \"10d0b397-d7f4-4b0b-b5fb-ed44b29f6dbd\") " pod="kube-system/storage-provisioner"
	Feb 19 04:38:08 NoKubernetes-928900 kubelet[2567]: I0219 04:38:08.496116    2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrh99\" (UniqueName: \"kubernetes.io/projected/014f9e4c-8fef-491f-8fea-aa8c38cdaba4-kube-api-access-xrh99\") pod \"coredns-787d4945fb-jnhkk\" (UID: \"014f9e4c-8fef-491f-8fea-aa8c38cdaba4\") " pod="kube-system/coredns-787d4945fb-jnhkk"
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.497617    2567 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.497734    2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/26e578c4-0e56-491d-b765-a3d763882b2f-kube-proxy podName:26e578c4-0e56-491d-b765-a3d763882b2f nodeName:}" failed. No retries permitted until 2023-02-19 04:38:09.997709991 +0000 UTC m=+14.648335787 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/26e578c4-0e56-491d-b765-a3d763882b2f-kube-proxy") pod "kube-proxy-lhrch" (UID: "26e578c4-0e56-491d-b765-a3d763882b2f") : failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.544841    2567 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.544889    2567 projected.go:198] Error preparing data for projected volume kube-api-access-ffb52 for pod kube-system/kube-proxy-lhrch: failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.544974    2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/26e578c4-0e56-491d-b765-a3d763882b2f-kube-api-access-ffb52 podName:26e578c4-0e56-491d-b765-a3d763882b2f nodeName:}" failed. No retries permitted until 2023-02-19 04:38:10.044952901 +0000 UTC m=+14.695578597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ffb52" (UniqueName: "kubernetes.io/projected/26e578c4-0e56-491d-b765-a3d763882b2f-kube-api-access-ffb52") pod "kube-proxy-lhrch" (UID: "26e578c4-0e56-491d-b765-a3d763882b2f") : failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.597956    2567 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.598516    2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/014f9e4c-8fef-491f-8fea-aa8c38cdaba4-config-volume podName:014f9e4c-8fef-491f-8fea-aa8c38cdaba4 nodeName:}" failed. No retries permitted until 2023-02-19 04:38:10.098487052 +0000 UTC m=+14.749112748 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/014f9e4c-8fef-491f-8fea-aa8c38cdaba4-config-volume") pod "coredns-787d4945fb-jnhkk" (UID: "014f9e4c-8fef-491f-8fea-aa8c38cdaba4") : failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.609274    2567 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.610111    2567 projected.go:198] Error preparing data for projected volume kube-api-access-btzpb for pod kube-system/storage-provisioner: failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.610299    2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10d0b397-d7f4-4b0b-b5fb-ed44b29f6dbd-kube-api-access-btzpb podName:10d0b397-d7f4-4b0b-b5fb-ed44b29f6dbd nodeName:}" failed. No retries permitted until 2023-02-19 04:38:10.11028393 +0000 UTC m=+14.760909626 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-btzpb" (UniqueName: "kubernetes.io/projected/10d0b397-d7f4-4b0b-b5fb-ed44b29f6dbd-kube-api-access-btzpb") pod "storage-provisioner" (UID: "10d0b397-d7f4-4b0b-b5fb-ed44b29f6dbd") : failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.743749    2567 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.743820    2567 projected.go:198] Error preparing data for projected volume kube-api-access-xrh99 for pod kube-system/coredns-787d4945fb-jnhkk: failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:09 NoKubernetes-928900 kubelet[2567]: E0219 04:38:09.743891    2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/014f9e4c-8fef-491f-8fea-aa8c38cdaba4-kube-api-access-xrh99 podName:014f9e4c-8fef-491f-8fea-aa8c38cdaba4 nodeName:}" failed. No retries permitted until 2023-02-19 04:38:10.243871206 +0000 UTC m=+14.894496902 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xrh99" (UniqueName: "kubernetes.io/projected/014f9e4c-8fef-491f-8fea-aa8c38cdaba4-kube-api-access-xrh99") pod "coredns-787d4945fb-jnhkk" (UID: "014f9e4c-8fef-491f-8fea-aa8c38cdaba4") : failed to sync configmap cache: timed out waiting for the condition
	Feb 19 04:38:12 NoKubernetes-928900 kubelet[2567]: I0219 04:38:12.825882    2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d75146e32ea3c26a5262ce325a5ad4fb10a96331a9e80bc79ed561325344020"
	Feb 19 04:38:13 NoKubernetes-928900 kubelet[2567]: I0219 04:38:13.169970    2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2593c2cd0913c8204efb90c659a5c84b35dc6067512c0c6a18ee2dd6433747c9"
	Feb 19 04:38:14 NoKubernetes-928900 kubelet[2567]: I0219 04:38:14.267734    2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lhrch" podStartSLOduration=6.267671341 pod.CreationTimestamp="2023-02-19 04:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-19 04:38:14.233377542 +0000 UTC m=+18.884003238" watchObservedRunningTime="2023-02-19 04:38:14.267671341 +0000 UTC m=+18.918297137"
	Feb 19 04:38:14 NoKubernetes-928900 kubelet[2567]: I0219 04:38:14.272659    2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-jnhkk" podStartSLOduration=6.272001866 pod.CreationTimestamp="2023-02-19 04:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-19 04:38:14.271063861 +0000 UTC m=+18.921689557" watchObservedRunningTime="2023-02-19 04:38:14.272001866 +0000 UTC m=+18.922627662"
	Feb 19 04:38:16 NoKubernetes-928900 kubelet[2567]: I0219 04:38:16.725889    2567 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 19 04:38:16 NoKubernetes-928900 kubelet[2567]: I0219 04:38:16.726966    2567 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	
	* 
	* ==> storage-provisioner [8405542e120b] <==
	* I0219 04:38:13.602462       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0219 04:38:13.619698       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0219 04:38:13.619759       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0219 04:38:13.641123       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0219 04:38:13.642006       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba74618a-6aac-427d-a93e-d509b288709a", APIVersion:"v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' NoKubernetes-928900_60717077-f010-4865-9bf0-d40e89b4d863 became leader
	I0219 04:38:13.642462       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_NoKubernetes-928900_60717077-f010-4865-9bf0-d40e89b4d863!
	I0219 04:38:13.743830       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_NoKubernetes-928900_60717077-f010-4865-9bf0-d40e89b4d863!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p NoKubernetes-928900 -n NoKubernetes-928900
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p NoKubernetes-928900 -n NoKubernetes-928900: (5.6928347s)
helpers_test.go:261: (dbg) Run:  kubectl --context NoKubernetes-928900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestNoKubernetes/serial/StartWithK8s FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestNoKubernetes/serial/StartWithK8s (336.22s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (360.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.113560527.exe start -p stopped-upgrade-608000 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:191: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.113560527.exe start -p stopped-upgrade-608000 --memory=2200 --vm-driver=hyperv: (3m28.2868405s)
version_upgrade_test.go:200: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.113560527.exe -p stopped-upgrade-608000 stop
version_upgrade_test.go:200: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.113560527.exe -p stopped-upgrade-608000 stop: (21.2069725s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-608000 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:206: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p stopped-upgrade-608000 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (2m11.2605853s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-608000] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=master
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	* Using the hyperv driver based on existing profile
	* Starting control plane node stopped-upgrade-608000 in cluster stopped-upgrade-608000
	* Restarting existing hyperv VM for "stopped-upgrade-608000" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0219 04:39:52.885224    9596 out.go:296] Setting OutFile to fd 1628 ...
	I0219 04:39:52.960222    9596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:39:52.960222    9596 out.go:309] Setting ErrFile to fd 1484...
	I0219 04:39:52.960222    9596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:39:52.980223    9596 out.go:303] Setting JSON to false
	I0219 04:39:52.985222    9596 start.go:125] hostinfo: {"hostname":"minikube1","uptime":18582,"bootTime":1676763010,"procs":162,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2604 Build 19045.2604","kernelVersion":"10.0.19045.2604 Build 19045.2604","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0219 04:39:52.985222    9596 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0219 04:39:53.790931    9596 out.go:177] * [stopped-upgrade-608000] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	I0219 04:39:53.895074    9596 notify.go:220] Checking for updates...
	I0219 04:39:54.195258    9596 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:39:54.882442    9596 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0219 04:39:55.592513    9596 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0219 04:39:56.343248    9596 out.go:177]   - MINIKUBE_LOCATION=master
	I0219 04:39:56.577816    9596 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0219 04:39:56.847521    9596 config.go:182] Loaded profile config "stopped-upgrade-608000": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0219 04:39:56.847593    9596 start_flags.go:687] config upgrade: Driver=hyperv
	I0219 04:39:56.847593    9596 start_flags.go:699] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0219 04:39:56.847826    9596 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\stopped-upgrade-608000\config.json ...
	I0219 04:39:57.243728    9596 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	I0219 04:39:57.527832    9596 driver.go:365] Setting default libvirt URI to qemu:///system
	I0219 04:39:59.246978    9596 out.go:177] * Using the hyperv driver based on existing profile
	I0219 04:39:59.350322    9596 start.go:296] selected driver: hyperv
	I0219 04:39:59.350355    9596 start.go:857] validating driver "hyperv" against &{Name:stopped-upgrade-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0
ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.28.244.131 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
}
	I0219 04:39:59.350451    9596 start.go:868] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0219 04:39:59.396520    9596 cni.go:84] Creating CNI manager for ""
	I0219 04:39:59.396551    9596 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0219 04:39:59.396551    9596 start_flags.go:319] config:
	{Name:stopped-upgrade-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.28.244.131 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:39:59.396801    9596 iso.go:125] acquiring lock: {Name:mk0a282de77c20a01e287b73437e6c43df35e4e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:39:59.624599    9596 out.go:177] * Starting control plane node stopped-upgrade-608000 in cluster stopped-upgrade-608000
	I0219 04:39:59.794312    9596 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W0219 04:39:59.835782    9596 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0219 04:39:59.836058    9596 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\stopped-upgrade-608000\config.json ...
	I0219 04:39:59.836058    9596 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0219 04:39:59.836058    9596 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause:3.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1
	I0219 04:39:59.836058    9596 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0
	I0219 04:39:59.836058    9596 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns:1.6.5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5
	I0219 04:39:59.836058    9596 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0
	I0219 04:39:59.836058    9596 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0
	I0219 04:39:59.836058    9596 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0
	I0219 04:39:59.836058    9596 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0
	I0219 04:39:59.839050    9596 cache.go:193] Successfully downloaded all kic artifacts
	I0219 04:39:59.839050    9596 start.go:364] acquiring machines lock for stopped-upgrade-608000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0219 04:40:00.019154    9596 cache.go:107] acquiring lock: {Name:mk846f443ad8ebb3f71dcc8a6ad332b2ccd1fb49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:40:00.019796    9596 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0219 04:40:00.022149    9596 cache.go:107] acquiring lock: {Name:mkfb2624f831f02f88a5c798c7a43a1bbe61fae1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:40:00.022214    9596 cache.go:107] acquiring lock: {Name:mkab5ef4697aba25176a9bbf5de0bbfc032f2317 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:40:00.022617    9596 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0219 04:40:00.022617    9596 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0219 04:40:00.031695    9596 cache.go:107] acquiring lock: {Name:mka45a59e14b38ef0230da2ff86231ec86a62154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:40:00.031695    9596 cache.go:107] acquiring lock: {Name:mk72ecb1f76555793f8c9be18fe62d4a9799d53f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:40:00.031695    9596 cache.go:107] acquiring lock: {Name:mk8a34ca3f90bc9ebc6fc19a51807d5bbe286002 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:40:00.031695    9596 cache.go:107] acquiring lock: {Name:mkee5b2ba88b1109b760d9a4a39a505ba4aef2c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:40:00.031695    9596 cache.go:107] acquiring lock: {Name:mk67b634fe9a890edc5195da54a2f3093e0c8f30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:40:00.031695    9596 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0219 04:40:00.031695    9596 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0219 04:40:00.031695    9596 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0219 04:40:00.031695    9596 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 195.6378ms
	I0219 04:40:00.031695    9596 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0219 04:40:00.031695    9596 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0219 04:40:00.031695    9596 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0219 04:40:00.037682    9596 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0219 04:40:00.043679    9596 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
	I0219 04:40:00.048745    9596 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
	I0219 04:40:00.051685    9596 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
	I0219 04:40:00.053683    9596 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0219 04:40:00.057683    9596 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0219 04:40:00.065698    9596 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
	W0219 04:40:00.146458    9596 image.go:187] authn lookup for k8s.gcr.io/kube-apiserver:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0219 04:40:00.256073    9596 image.go:187] authn lookup for k8s.gcr.io/kube-proxy:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0219 04:40:00.357684    9596 image.go:187] authn lookup for k8s.gcr.io/coredns:1.6.5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0219 04:40:00.393939    9596 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0
	W0219 04:40:00.450876    9596 image.go:187] authn lookup for k8s.gcr.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0219 04:40:00.473865    9596 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0
	I0219 04:40:00.575854    9596 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5
	W0219 04:40:00.593072    9596 image.go:187] authn lookup for k8s.gcr.io/kube-controller-manager:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0219 04:40:00.707389    9596 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0
	W0219 04:40:00.764258    9596 image.go:187] authn lookup for k8s.gcr.io/kube-scheduler:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0219 04:40:00.812745    9596 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0
	W0219 04:40:00.859360    9596 image.go:187] authn lookup for k8s.gcr.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0219 04:40:00.913107    9596 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5 exists
	I0219 04:40:00.914126    9596 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\coredns_1.6.5" took 1.0780721s
	I0219 04:40:00.914126    9596 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.5 succeeded
	I0219 04:40:00.967333    9596 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0
	I0219 04:40:01.078113    9596 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1
	I0219 04:40:01.264703    9596 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1 exists
	I0219 04:40:01.264703    9596 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\pause_3.1" took 1.4286504s
	I0219 04:40:01.268695    9596 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.1 succeeded
	I0219 04:40:01.633723    9596 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0 exists
	I0219 04:40:01.633723    9596 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-apiserver_v1.17.0" took 1.7971272s
	I0219 04:40:01.633723    9596 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.17.0 succeeded
	I0219 04:40:01.939599    9596 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0 exists
	I0219 04:40:01.940599    9596 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-scheduler_v1.17.0" took 2.1045481s
	I0219 04:40:01.940599    9596 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.17.0 succeeded
	I0219 04:40:02.338906    9596 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0 exists
	I0219 04:40:02.339276    9596 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0 exists
	I0219 04:40:02.339358    9596 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-controller-manager_v1.17.0" took 2.5033093s
	I0219 04:40:02.339451    9596 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\etcd_3.4.3-0" took 2.5033093s
	I0219 04:40:02.339451    9596 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.17.0 succeeded
	I0219 04:40:02.339525    9596 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0 succeeded
	I0219 04:40:02.339451    9596 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0 exists
	I0219 04:40:02.339838    9596 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-proxy_v1.17.0" took 2.5031864s
	I0219 04:40:02.339892    9596 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.17.0 succeeded
	I0219 04:40:02.339892    9596 cache.go:87] Successfully saved all images to host disk.
	I0219 04:40:48.239573    9596 start.go:368] acquired machines lock for "stopped-upgrade-608000" in 48.4006975s
	I0219 04:40:48.240205    9596 start.go:96] Skipping create...Using existing machine configuration
	I0219 04:40:48.240205    9596 fix.go:55] fixHost starting: minikube
	I0219 04:40:48.240974    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:40:48.948162    9596 main.go:141] libmachine: [stdout =====>] : Off
	
	I0219 04:40:48.948283    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:40:48.948283    9596 fix.go:103] recreateIfNeeded on stopped-upgrade-608000: state=Stopped err=<nil>
	W0219 04:40:48.948283    9596 fix.go:129] unexpected machine state, will restart: <nil>
	I0219 04:40:48.952376    9596 out.go:177] * Restarting existing hyperv VM for "stopped-upgrade-608000" ...
	I0219 04:40:48.954901    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM stopped-upgrade-608000
	I0219 04:40:50.584967    9596 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:40:50.584967    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:40:50.584967    9596 main.go:141] libmachine: Waiting for host to start...
	I0219 04:40:50.584967    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:40:51.332998    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:40:51.332998    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:40:51.332998    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:40:52.450879    9596 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:40:52.450879    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:40:53.455908    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:40:54.290953    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:40:54.290953    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:40:54.290953    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:40:55.400831    9596 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:40:55.400831    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:40:56.404908    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:40:57.149643    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:40:57.149643    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:40:57.149643    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:40:58.203921    9596 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:40:58.203921    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:40:59.208482    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:40:59.955333    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:40:59.955333    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:40:59.955706    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:00.975731    9596 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:41:00.975731    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:01.980994    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:02.711986    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:02.712177    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:02.712177    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:03.791527    9596 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:41:03.791527    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:04.792188    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:05.541086    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:05.541086    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:05.541160    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:06.590094    9596 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:41:06.590094    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:07.596825    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:08.363272    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:08.363272    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:08.363272    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:09.380751    9596 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:41:09.380751    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:10.395154    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:11.137878    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:11.137878    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:11.137878    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:12.152960    9596 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:41:12.153113    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:13.157157    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:13.918480    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:13.918480    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:13.918480    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:15.044108    9596 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:41:15.044458    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:16.046553    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:16.822346    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:16.822508    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:16.822583    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:17.882994    9596 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:41:17.883212    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:18.884513    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:19.630757    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:19.630757    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:19.630757    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:20.708235    9596 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:41:20.708497    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:21.723456    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:22.561634    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:22.561885    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:22.561885    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:23.674333    9596 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:41:23.674333    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:24.686194    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:25.477929    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:25.478073    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:25.478073    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:26.558793    9596 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:41:26.559010    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:27.570626    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:28.348325    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:28.348325    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:28.348421    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:29.409308    9596 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:41:29.409483    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:30.424485    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:31.200698    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:31.200934    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:31.201013    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:32.263838    9596 main.go:141] libmachine: [stdout =====>] : 
	I0219 04:41:32.263908    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:33.264990    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:34.023570    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:34.023570    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:34.023570    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:35.126964    9596 main.go:141] libmachine: [stdout =====>] : 172.28.244.131
	
	I0219 04:41:35.127157    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:35.130805    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:35.840918    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:35.840918    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:35.841133    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:36.903473    9596 main.go:141] libmachine: [stdout =====>] : 172.28.244.131
	
	I0219 04:41:36.903547    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:36.903672    9596 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\stopped-upgrade-608000\config.json ...
	I0219 04:41:36.906289    9596 machine.go:88] provisioning docker machine ...
	I0219 04:41:36.906375    9596 buildroot.go:166] provisioning hostname "stopped-upgrade-608000"
	I0219 04:41:36.906375    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:37.613415    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:37.613580    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:37.613702    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:38.702570    9596 main.go:141] libmachine: [stdout =====>] : 172.28.244.131
	
	I0219 04:41:38.702570    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:38.706638    9596 main.go:141] libmachine: Using SSH client type: native
	I0219 04:41:38.708104    9596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.244.131 22 <nil> <nil>}
	I0219 04:41:38.708104    9596 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-608000 && echo "stopped-upgrade-608000" | sudo tee /etc/hostname
	I0219 04:41:38.855414    9596 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-608000
	
	I0219 04:41:38.855477    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:39.617200    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:39.617276    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:39.617378    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:40.742285    9596 main.go:141] libmachine: [stdout =====>] : 172.28.244.131
	
	I0219 04:41:40.742594    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:40.746741    9596 main.go:141] libmachine: Using SSH client type: native
	I0219 04:41:40.747473    9596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.244.131 22 <nil> <nil>}
	I0219 04:41:40.747473    9596 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-608000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-608000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-608000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:41:40.878583    9596 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0219 04:41:40.878583    9596 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:41:40.879969    9596 buildroot.go:174] setting up certificates
	I0219 04:41:40.879969    9596 provision.go:83] configureAuth start
	I0219 04:41:40.880095    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:41.659579    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:41.659579    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:41.659579    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:42.734902    9596 main.go:141] libmachine: [stdout =====>] : 172.28.244.131
	
	I0219 04:41:42.735089    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:42.735280    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:43.509883    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:43.510073    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:43.510126    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:44.590427    9596 main.go:141] libmachine: [stdout =====>] : 172.28.244.131
	
	I0219 04:41:44.590583    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:44.590583    9596 provision.go:138] copyHostCerts
	I0219 04:41:44.590964    9596 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:41:44.591051    9596 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:41:44.591538    9596 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:41:44.592292    9596 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:41:44.592292    9596 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:41:44.592292    9596 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:41:44.593265    9596 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:41:44.593265    9596 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:41:44.594267    9596 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:41:44.595284    9596 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.stopped-upgrade-608000 san=[172.28.244.131 172.28.244.131 localhost 127.0.0.1 minikube stopped-upgrade-608000]
	I0219 04:41:44.754284    9596 provision.go:172] copyRemoteCerts
	I0219 04:41:44.764682    9596 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:41:44.765666    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:45.483360    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:45.483360    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:45.483360    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:46.548751    9596 main.go:141] libmachine: [stdout =====>] : 172.28.244.131
	
	I0219 04:41:46.548751    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:46.548751    9596 sshutil.go:53] new ssh client: &{IP:172.28.244.131 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\stopped-upgrade-608000\id_rsa Username:docker}
	I0219 04:41:46.654207    9596 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.8885474s)
	I0219 04:41:46.654207    9596 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:41:46.677054    9596 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0219 04:41:46.695865    9596 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0219 04:41:46.715816    9596 provision.go:86] duration metric: configureAuth took 5.8358073s
	I0219 04:41:46.715816    9596 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:41:46.715816    9596 config.go:182] Loaded profile config "stopped-upgrade-608000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0219 04:41:46.715816    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:47.436755    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:47.436791    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:47.436863    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:48.535205    9596 main.go:141] libmachine: [stdout =====>] : 172.28.244.131
	
	I0219 04:41:48.535259    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:48.539642    9596 main.go:141] libmachine: Using SSH client type: native
	I0219 04:41:48.540894    9596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.244.131 22 <nil> <nil>}
	I0219 04:41:48.540894    9596 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:41:48.684297    9596 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:41:48.684297    9596 buildroot.go:70] root file system type: tmpfs
	I0219 04:41:48.684297    9596 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:41:48.684297    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:49.441589    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:49.441589    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:49.441659    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:50.537393    9596 main.go:141] libmachine: [stdout =====>] : 172.28.244.131
	
	I0219 04:41:50.537393    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:50.542301    9596 main.go:141] libmachine: Using SSH client type: native
	I0219 04:41:50.543210    9596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.244.131 22 <nil> <nil>}
	I0219 04:41:50.543210    9596 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 04:41:50.677964    9596 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0219 04:41:50.677964    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:51.405764    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:51.405930    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:51.405986    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:52.512116    9596 main.go:141] libmachine: [stdout =====>] : 172.28.244.131
	
	I0219 04:41:52.512293    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:52.517278    9596 main.go:141] libmachine: Using SSH client type: native
	I0219 04:41:52.518495    9596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.244.131 22 <nil> <nil>}
	I0219 04:41:52.518562    9596 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 04:41:53.917535    9596 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0219 04:41:53.917535    9596 machine.go:91] provisioned docker machine in 17.011221s
	I0219 04:41:53.917652    9596 start.go:300] post-start starting for "stopped-upgrade-608000" (driver="hyperv")
	I0219 04:41:53.917652    9596 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 04:41:53.927616    9596 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 04:41:53.927616    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:54.689322    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:54.689498    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:54.689581    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:55.755418    9596 main.go:141] libmachine: [stdout =====>] : 172.28.244.131
	
	I0219 04:41:55.755635    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:55.756104    9596 sshutil.go:53] new ssh client: &{IP:172.28.244.131 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\stopped-upgrade-608000\id_rsa Username:docker}
	I0219 04:41:55.863652    9596 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.9360101s)
	I0219 04:41:55.874638    9596 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 04:41:55.881784    9596 info.go:137] Remote host: Buildroot 2019.02.7
	I0219 04:41:55.881784    9596 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 04:41:55.881784    9596 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 04:41:55.883147    9596 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 04:41:55.893964    9596 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 04:41:55.902896    9596 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 04:41:55.921376    9596 start.go:303] post-start completed in 2.0037306s
	I0219 04:41:55.921376    9596 fix.go:57] fixHost completed within 1m7.6814181s
	I0219 04:41:55.921376    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:56.659486    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:56.659486    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:56.659594    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:57.691861    9596 main.go:141] libmachine: [stdout =====>] : 172.28.244.131
	
	I0219 04:41:57.691861    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:57.696757    9596 main.go:141] libmachine: Using SSH client type: native
	I0219 04:41:57.697454    9596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.244.131 22 <nil> <nil>}
	I0219 04:41:57.697454    9596 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0219 04:41:57.838017    9596 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676781717.828162485
	
	I0219 04:41:57.838113    9596 fix.go:207] guest clock: 1676781717.828162485
	I0219 04:41:57.838113    9596 fix.go:220] Guest: 2023-02-19 04:41:57.828162485 +0000 GMT Remote: 2023-02-19 04:41:55.9213762 +0000 GMT m=+123.189997701 (delta=1.906786285s)
	I0219 04:41:57.838199    9596 fix.go:191] guest clock delta is within tolerance: 1.906786285s
	I0219 04:41:57.838199    9596 start.go:83] releasing machines lock for "stopped-upgrade-608000", held for 1m9.5988795s
	I0219 04:41:57.838441    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:58.592852    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:41:58.592892    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:58.592948    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:41:59.708717    9596 main.go:141] libmachine: [stdout =====>] : 172.28.244.131
	
	I0219 04:41:59.708717    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:41:59.711802    9596 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0219 04:41:59.711802    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:41:59.720379    9596 ssh_runner.go:195] Run: cat /version.json
	I0219 04:41:59.720379    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-608000 ).state
	I0219 04:42:00.514857    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:42:00.514857    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:42:00.514857    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:42:00.522682    9596 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:42:00.522884    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:42:00.522916    9596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-608000 ).networkadapters[0]).ipaddresses[0]
	I0219 04:42:01.689899    9596 main.go:141] libmachine: [stdout =====>] : 172.28.244.131
	
	I0219 04:42:01.689899    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:42:01.689899    9596 sshutil.go:53] new ssh client: &{IP:172.28.244.131 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\stopped-upgrade-608000\id_rsa Username:docker}
	I0219 04:42:01.733944    9596 main.go:141] libmachine: [stdout =====>] : 172.28.244.131
	
	I0219 04:42:01.734697    9596 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:42:01.735103    9596 sshutil.go:53] new ssh client: &{IP:172.28.244.131 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\stopped-upgrade-608000\id_rsa Username:docker}
	I0219 04:42:01.790421    9596 ssh_runner.go:235] Completed: cat /version.json: (2.0700502s)
	W0219 04:42:01.790505    9596 start.go:396] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0219 04:42:01.798923    9596 ssh_runner.go:195] Run: systemctl --version
	I0219 04:42:01.861486    9596 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (2.1496916s)
	I0219 04:42:01.871677    9596 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0219 04:42:01.881034    9596 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0219 04:42:01.891803    9596 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0219 04:42:01.909597    9596 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0219 04:42:01.917331    9596 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0219 04:42:01.917434    9596 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0219 04:42:01.917434    9596 start.go:485] detecting cgroup driver to use...
	I0219 04:42:01.917670    9596 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:42:01.941600    9596 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0219 04:42:01.958601    9596 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0219 04:42:01.968910    9596 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0219 04:42:01.976596    9596 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0219 04:42:01.992616    9596 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:42:02.011882    9596 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0219 04:42:02.029828    9596 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:42:02.046821    9596 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0219 04:42:02.063827    9596 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0219 04:42:02.083844    9596 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0219 04:42:02.099834    9596 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0219 04:42:02.116825    9596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:42:02.230163    9596 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0219 04:42:02.249031    9596 start.go:485] detecting cgroup driver to use...
	I0219 04:42:02.259733    9596 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0219 04:42:02.280574    9596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:42:02.300438    9596 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0219 04:42:02.372427    9596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:42:02.397363    9596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:42:02.411846    9596 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:42:02.444216    9596 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0219 04:42:02.561058    9596 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0219 04:42:02.673836    9596 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0219 04:42:02.673883    9596 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0219 04:42:02.699366    9596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:42:02.817176    9596 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:42:03.903003    9596 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0858311s)
	I0219 04:42:03.910308    9596 out.go:177] 
	W0219 04:42:03.913501    9596 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W0219 04:42:03.913501    9596 out.go:239] * 
	W0219 04:42:03.914562    9596 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0219 04:42:03.917226    9596 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:208: upgrade from v1.6.2 to HEAD failed: out/minikube-windows-amd64.exe start -p stopped-upgrade-608000 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (360.88s)
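The RUNTIME_ENABLE exit above means `sudo systemctl restart docker` failed on the guest immediately after minikube wrote a fresh 144-byte `/etc/docker/daemon.json` (cgroupfs driver) and ran `daemon-reload`. Beyond the log's own advice (`systemctl status docker.service`, `journalctl -xe` on the guest), a malformed `daemon.json` is one common cause of dockerd dying at restart. As a minimal, self-contained sketch (the file path and JSON content below are hypothetical sample data, not taken from this run), the JSON can be syntax-checked before the restart is attempted:

```shell
# Sketch: syntax-check a Docker daemon.json before restarting the service.
# Hypothetical sample content; on a real guest you would point this at
# /etc/docker/daemon.json instead of /tmp.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF

# python3 -m json.tool exits non-zero on malformed JSON, so the
# restart can be gated on it instead of failing mid-restart:
if python3 -m json.tool /tmp/daemon.json > /dev/null; then
  echo "daemon.json OK"
else
  echo "daemon.json malformed" >&2
fi
```

This only catches syntax errors; an option dockerd itself rejects would still need the journal to diagnose.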

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (234.97s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-061400 --alsologtostderr -v=1 --driver=hyperv
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-061400 --alsologtostderr -v=1 --driver=hyperv: (3m20.4119404s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-061400] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=master
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node pause-061400 in cluster pause-061400
	* Updating the running hyperv "pause-061400" VM ...
	* Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	
	  - Want kubectl v1.26.1? Try 'minikube kubectl -- get pods -A'
	* Done! kubectl is now configured to use "pause-061400" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0219 04:41:52.310106   11108 out.go:296] Setting OutFile to fd 1484 ...
	I0219 04:41:52.369933   11108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:41:52.369933   11108 out.go:309] Setting ErrFile to fd 924...
	I0219 04:41:52.369933   11108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:41:52.388524   11108 out.go:303] Setting JSON to false
	I0219 04:41:52.391581   11108 start.go:125] hostinfo: {"hostname":"minikube1","uptime":18701,"bootTime":1676763010,"procs":157,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2604 Build 19045.2604","kernelVersion":"10.0.19045.2604 Build 19045.2604","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0219 04:41:52.391829   11108 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0219 04:41:52.399064   11108 out.go:177] * [pause-061400] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	I0219 04:41:52.402972   11108 notify.go:220] Checking for updates...
	I0219 04:41:52.405686   11108 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:41:52.409814   11108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0219 04:41:52.411301   11108 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0219 04:41:52.415564   11108 out.go:177]   - MINIKUBE_LOCATION=master
	I0219 04:41:52.417906   11108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0219 04:41:52.421517   11108 config.go:182] Loaded profile config "pause-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:41:52.422578   11108 driver.go:365] Setting default libvirt URI to qemu:///system
	I0219 04:41:54.123845   11108 out.go:177] * Using the hyperv driver based on existing profile
	I0219 04:41:54.126664   11108 start.go:296] selected driver: hyperv
	I0219 04:41:54.126664   11108 start.go:857] validating driver "hyperv" against &{Name:pause-061400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.26.1 ClusterName:pause-061400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.246.210 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-sec
urity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:41:54.126664   11108 start.go:868] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0219 04:41:54.176076   11108 cni.go:84] Creating CNI manager for ""
	I0219 04:41:54.176076   11108 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:41:54.176076   11108 start_flags.go:319] config:
	{Name:pause-061400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:pause-061400 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.246.210 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registr
y-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:41:54.176791   11108 iso.go:125] acquiring lock: {Name:mk0a282de77c20a01e287b73437e6c43df35e4e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:41:54.181085   11108 out.go:177] * Starting control plane node pause-061400 in cluster pause-061400
	I0219 04:41:54.183317   11108 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:41:54.183317   11108 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0219 04:41:54.183317   11108 cache.go:57] Caching tarball of preloaded images
	I0219 04:41:54.184323   11108 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0219 04:41:54.184323   11108 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0219 04:41:54.184323   11108 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-061400\config.json ...
	I0219 04:41:54.186320   11108 cache.go:193] Successfully downloaded all kic artifacts
	I0219 04:41:54.186320   11108 start.go:364] acquiring machines lock for pause-061400: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0219 04:43:27.388262   11108 start.go:368] acquired machines lock for "pause-061400" in 1m33.2022809s
	I0219 04:43:27.388262   11108 start.go:96] Skipping create...Using existing machine configuration
	I0219 04:43:27.388262   11108 fix.go:55] fixHost starting: 
	I0219 04:43:27.389260   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-061400 ).state
	I0219 04:43:28.198989   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:28.199188   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:28.199188   11108 fix.go:103] recreateIfNeeded on pause-061400: state=Running err=<nil>
	W0219 04:43:28.199188   11108 fix.go:129] unexpected machine state, will restart: <nil>
	I0219 04:43:28.203470   11108 out.go:177] * Updating the running hyperv "pause-061400" VM ...
	I0219 04:43:28.206027   11108 machine.go:88] provisioning docker machine ...
	I0219 04:43:28.206109   11108 buildroot.go:166] provisioning hostname "pause-061400"
	I0219 04:43:28.206206   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-061400 ).state
	I0219 04:43:28.961448   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:28.961523   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:28.961596   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-061400 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:30.132981   11108 main.go:141] libmachine: [stdout =====>] : 172.28.246.210
	
	I0219 04:43:30.133154   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:30.138520   11108 main.go:141] libmachine: Using SSH client type: native
	I0219 04:43:30.140026   11108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.210 22 <nil> <nil>}
	I0219 04:43:30.140026   11108 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-061400 && echo "pause-061400" | sudo tee /etc/hostname
	I0219 04:43:30.337181   11108 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-061400
	
	I0219 04:43:30.337729   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-061400 ).state
	I0219 04:43:31.114599   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:31.114668   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:31.114668   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-061400 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:32.279106   11108 main.go:141] libmachine: [stdout =====>] : 172.28.246.210
	
	I0219 04:43:32.279106   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:32.283841   11108 main.go:141] libmachine: Using SSH client type: native
	I0219 04:43:32.284700   11108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.210 22 <nil> <nil>}
	I0219 04:43:32.284833   11108 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-061400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-061400/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-061400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:43:32.426805   11108 main.go:141] libmachine: SSH cmd err, output: <nil>: 
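The hosts-file snippet just executed is idempotent: it only touches `/etc/hosts` when the hostname is absent, rewriting an existing `127.0.1.1` line in place when there is one and appending otherwise. The same logic can be exercised against a scratch copy (the path and seed contents below are stand-ins, not the real guest file):

```shell
# Sketch of the idempotent hosts update, against a scratch file.
hostsfile=/tmp/hosts.demo
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$hostsfile"

name=pause-061400
if ! grep -q "[[:space:]]$name\$" "$hostsfile"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hostsfile"; then
    # a 127.0.1.1 line already exists: rewrite it in place
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hostsfile"
  else
    # no 127.0.1.1 line yet: append one
    echo "127.0.1.1 $name" >> "$hostsfile"
  fi
fi
cat "$hostsfile"
# → 127.0.0.1 localhost
# → 127.0.1.1 pause-061400
```

Running it a second time changes nothing, which is why the provisioner can re-run it safely on an already-configured machine.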
	I0219 04:43:32.426805   11108 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:43:32.426805   11108 buildroot.go:174] setting up certificates
	I0219 04:43:32.426805   11108 provision.go:83] configureAuth start
	I0219 04:43:32.427353   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-061400 ).state
	I0219 04:43:33.197250   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:33.197250   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:33.197250   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-061400 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:34.319604   11108 main.go:141] libmachine: [stdout =====>] : 172.28.246.210
	
	I0219 04:43:34.319604   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:34.319604   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-061400 ).state
	I0219 04:43:35.097012   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:35.097169   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:35.097192   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-061400 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:36.264476   11108 main.go:141] libmachine: [stdout =====>] : 172.28.246.210
	
	I0219 04:43:36.264476   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:36.264476   11108 provision.go:138] copyHostCerts
	I0219 04:43:36.264476   11108 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:43:36.264476   11108 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:43:36.265263   11108 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:43:36.266759   11108 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:43:36.266759   11108 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:43:36.267206   11108 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:43:36.268522   11108 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:43:36.268522   11108 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:43:36.268522   11108 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:43:36.270432   11108 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-061400 san=[172.28.246.210 172.28.246.210 localhost 127.0.0.1 minikube pause-061400]
	I0219 04:43:36.615452   11108 provision.go:172] copyRemoteCerts
	I0219 04:43:36.624403   11108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:43:36.624403   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-061400 ).state
	I0219 04:43:37.424241   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:37.424321   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:37.424321   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-061400 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:38.563001   11108 main.go:141] libmachine: [stdout =====>] : 172.28.246.210
	
	I0219 04:43:38.563077   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:38.563177   11108 sshutil.go:53] new ssh client: &{IP:172.28.246.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-061400\id_rsa Username:docker}
	I0219 04:43:38.681093   11108 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.056697s)
	I0219 04:43:38.681351   11108 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0219 04:43:38.734654   11108 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:43:38.789930   11108 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I0219 04:43:38.835957   11108 provision.go:86] duration metric: configureAuth took 6.4091746s
	I0219 04:43:38.835957   11108 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:43:38.836891   11108 config.go:182] Loaded profile config "pause-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:43:38.836891   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-061400 ).state
	I0219 04:43:39.681845   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:39.681956   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:39.682025   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-061400 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:40.778959   11108 main.go:141] libmachine: [stdout =====>] : 172.28.246.210
	
	I0219 04:43:40.778959   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:40.782827   11108 main.go:141] libmachine: Using SSH client type: native
	I0219 04:43:40.783833   11108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.210 22 <nil> <nil>}
	I0219 04:43:40.783833   11108 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:43:40.945214   11108 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:43:40.945214   11108 buildroot.go:70] root file system type: tmpfs
	I0219 04:43:40.945214   11108 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:43:40.945214   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-061400 ).state
	I0219 04:43:41.721369   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:41.721369   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:41.721369   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-061400 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:42.845008   11108 main.go:141] libmachine: [stdout =====>] : 172.28.246.210
	
	I0219 04:43:42.845008   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:42.850227   11108 main.go:141] libmachine: Using SSH client type: native
	I0219 04:43:42.850908   11108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.210 22 <nil> <nil>}
	I0219 04:43:42.850908   11108 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 04:43:43.035211   11108 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
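The empty `ExecStart=` line in the unit written (and echoed back) above is the standard systemd idiom for replacing, rather than appending to, an inherited start command; without it, systemd rejects the unit with the "more than one ExecStart= setting" error quoted in the unit's own comments. A minimal standalone illustration of the same override pattern as a drop-in (the path and dockerd flags below are illustrative, not taken from this run):

```ini
# /etc/systemd/system/docker.service.d/override.conf (illustrative path)
[Service]
# The first directive clears the ExecStart inherited from the base unit;
# only then can a single replacement command be declared.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```

A drop-in like this takes effect only after `systemctl daemon-reload`, which is exactly what the guarded install step later in this log runs before restarting docker.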
	I0219 04:43:43.035211   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-061400 ).state
	I0219 04:43:43.826105   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:43.826477   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:43.826560   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-061400 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:44.936578   11108 main.go:141] libmachine: [stdout =====>] : 172.28.246.210
	
	I0219 04:43:44.936578   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:44.941011   11108 main.go:141] libmachine: Using SSH client type: native
	I0219 04:43:44.941794   11108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.210 22 <nil> <nil>}
	I0219 04:43:44.941794   11108 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 04:43:45.097182   11108 main.go:141] libmachine: SSH cmd err, output: <nil>: 
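The one-liner above is an update-only-if-changed guard: `diff -u` exits 0 when the installed unit already matches the freshly generated `.new` file, so the `mv`/`daemon-reload`/`restart` branch after `||` never runs (hence the empty output here and no docker restart). The same idiom on scratch files (names and contents below are stand-ins):

```shell
# Sketch of the guard on scratch files; the two files match, so diff
# exits 0 and the replace/restart branch is skipped entirely.
printf '[Service]\nExecStart=/usr/bin/dockerd\n' > /tmp/unit.cur
printf '[Service]\nExecStart=/usr/bin/dockerd\n' > /tmp/unit.new
diff -u /tmp/unit.cur /tmp/unit.new > /dev/null || {
  mv /tmp/unit.new /tmp/unit.cur
  echo "unit changed: daemon-reload and restart would run here"
}
```

Skipping the restart when nothing changed matters for this test in particular, since an unnecessary docker restart would disturb the running cluster the second `start` is supposed to leave untouched.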
	I0219 04:43:45.097182   11108 machine.go:91] provisioned docker machine in 16.8912163s
	I0219 04:43:45.097182   11108 start.go:300] post-start starting for "pause-061400" (driver="hyperv")
	I0219 04:43:45.097182   11108 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 04:43:45.107680   11108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 04:43:45.107680   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-061400 ).state
	I0219 04:43:45.889609   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:45.889741   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:45.889980   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-061400 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:47.024257   11108 main.go:141] libmachine: [stdout =====>] : 172.28.246.210
	
	I0219 04:43:47.024257   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:47.024257   11108 sshutil.go:53] new ssh client: &{IP:172.28.246.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-061400\id_rsa Username:docker}
	I0219 04:43:47.136014   11108 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (2.0273879s)
	I0219 04:43:47.145141   11108 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 04:43:47.151396   11108 info.go:137] Remote host: Buildroot 2021.02.12
	I0219 04:43:47.151396   11108 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 04:43:47.152032   11108 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 04:43:47.152710   11108 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 04:43:47.164521   11108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 04:43:47.181176   11108 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 04:43:47.224980   11108 start.go:303] post-start completed in 2.1278052s
	I0219 04:43:47.224980   11108 fix.go:57] fixHost completed within 19.8367892s
	I0219 04:43:47.224980   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-061400 ).state
	I0219 04:43:48.052094   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:48.052094   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:48.052094   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-061400 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:49.149201   11108 main.go:141] libmachine: [stdout =====>] : 172.28.246.210
	
	I0219 04:43:49.149360   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:49.152981   11108 main.go:141] libmachine: Using SSH client type: native
	I0219 04:43:49.154032   11108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.246.210 22 <nil> <nil>}
	I0219 04:43:49.154107   11108 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0219 04:43:49.317392   11108 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676781829.310354567
	
	I0219 04:43:49.317392   11108 fix.go:207] guest clock: 1676781829.310354567
	I0219 04:43:49.317392   11108 fix.go:220] Guest: 2023-02-19 04:43:49.310354567 +0000 GMT Remote: 2023-02-19 04:43:47.2249801 +0000 GMT m=+115.032209601 (delta=2.085374467s)
	I0219 04:43:49.317392   11108 fix.go:191] guest clock delta is within tolerance: 2.085374467s
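The clock-skew check in `fix.go` above compares the guest's `date +%s.%N` output against the host timestamp recorded at `fixHost` completion. A sketch of that arithmetic, using the exact values from this log (the 3-second tolerance is an assumption for illustration; the real threshold is not shown here):

```python
# Guest clock from `date +%s.%N` (see "guest clock: 1676781829.310354567")
guest = 1676781829.310354567
# Host wall clock at fixHost completion: 2023-02-19 04:43:47.2249801 UTC
host = 1676781827.2249801

delta = abs(guest - host)          # log reports delta=2.085374467s
TOLERANCE_SECONDS = 3.0            # assumed threshold for illustration
within = delta < TOLERANCE_SECONDS
print(f"delta={delta:.3f}s within tolerance: {within}")  # → delta=2.085s within tolerance: True
```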
	I0219 04:43:49.317392   11108 start.go:83] releasing machines lock for "pause-061400", held for 21.9292088s
	I0219 04:43:49.317392   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-061400 ).state
	I0219 04:43:50.081340   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:50.081495   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:50.081495   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-061400 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:51.140922   11108 main.go:141] libmachine: [stdout =====>] : 172.28.246.210
	
	I0219 04:43:51.141097   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:51.144917   11108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0219 04:43:51.144917   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-061400 ).state
	I0219 04:43:51.152123   11108 ssh_runner.go:195] Run: cat /version.json
	I0219 04:43:51.152123   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-061400 ).state
	I0219 04:43:51.920647   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:51.920706   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:51.920706   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:43:51.920706   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:51.920706   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-061400 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:51.920706   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-061400 ).networkadapters[0]).ipaddresses[0]
	I0219 04:43:53.078199   11108 main.go:141] libmachine: [stdout =====>] : 172.28.246.210
	
	I0219 04:43:53.078199   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:53.078199   11108 sshutil.go:53] new ssh client: &{IP:172.28.246.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-061400\id_rsa Username:docker}
	I0219 04:43:53.099628   11108 main.go:141] libmachine: [stdout =====>] : 172.28.246.210
	
	I0219 04:43:53.099628   11108 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:43:53.099628   11108 sshutil.go:53] new ssh client: &{IP:172.28.246.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-061400\id_rsa Username:docker}
	I0219 04:43:53.178073   11108 ssh_runner.go:235] Completed: cat /version.json: (2.0259572s)
	I0219 04:43:53.188818   11108 ssh_runner.go:195] Run: systemctl --version
	I0219 04:43:53.245838   11108 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.1009282s)
	I0219 04:43:53.254888   11108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0219 04:43:53.263396   11108 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0219 04:43:53.272472   11108 ssh_runner.go:195] Run: which cri-dockerd
	I0219 04:43:53.288043   11108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0219 04:43:53.303195   11108 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0219 04:43:53.343511   11108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0219 04:43:53.356915   11108 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0219 04:43:53.356915   11108 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:43:53.367702   11108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:43:53.402384   11108 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:43:53.402453   11108 docker.go:560] Images already preloaded, skipping extraction
	I0219 04:43:53.402453   11108 start.go:485] detecting cgroup driver to use...
	I0219 04:43:53.402673   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:43:53.447710   11108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0219 04:43:53.475570   11108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0219 04:43:53.494563   11108 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0219 04:43:53.505897   11108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0219 04:43:53.533021   11108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:43:53.565032   11108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0219 04:43:53.593041   11108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:43:53.622159   11108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0219 04:43:53.650983   11108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0219 04:43:53.675368   11108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0219 04:43:53.701236   11108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0219 04:43:53.729937   11108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:43:53.971268   11108 ssh_runner.go:195] Run: sudo systemctl restart containerd
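The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place to force the `cgroupfs` driver and pin the sandbox image. The same edits can be sketched in Python against a toy config fragment (the fragment below is illustrative, not the VM's actual file):

```python
import re

# Toy containerd config fragment (assumed shape, for illustration only).
conf = '''
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
'''

# Mirror the sed edits: SystemdCgroup -> false, sandbox_image -> pause:3.9,
# preserving each line's leading indentation via the captured group.
conf = re.sub(r'^(\s*)SystemdCgroup = .*$',
              r'\1SystemdCgroup = false', conf, flags=re.M)
conf = re.sub(r'^(\s*)sandbox_image = .*$',
              r'\1sandbox_image = "registry.k8s.io/pause:3.9"', conf, flags=re.M)
print(conf)
```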
	I0219 04:43:54.001142   11108 start.go:485] detecting cgroup driver to use...
	I0219 04:43:54.011921   11108 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0219 04:43:54.043418   11108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:43:54.077362   11108 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0219 04:43:54.115364   11108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:43:54.148955   11108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:43:54.176219   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:43:54.221760   11108 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0219 04:43:54.450468   11108 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0219 04:43:54.664357   11108 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0219 04:43:54.664357   11108 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0219 04:43:54.728749   11108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:43:54.959282   11108 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:44:13.383120   11108 ssh_runner.go:235] Completed: sudo systemctl restart docker: (18.423905s)
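The 18.4s docker restart above follows writing a 144-byte `/etc/docker/daemon.json` configured for `cgroupfs` (see "configuring docker to use \"cgroupfs\" as cgroup driver"). The file's contents are not shown in this log, so the payload below is an assumption modeled on minikube's usual template:

```python
import json

# Assumed daemon.json payload -- only the cgroupfs driver setting is
# confirmed by this log; the other keys are illustrative.
daemon_json = {
    "exec-opts": ["native.cgroupdriver=cgroupfs"],
    "log-driver": "json-file",
    "log-opts": {"max-size": "100m"},
    "storage-driver": "overlay2",
}
print(json.dumps(daemon_json, indent=2))
```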
	I0219 04:44:13.395911   11108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:44:13.627570   11108 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0219 04:44:13.916780   11108 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:44:14.167611   11108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:44:14.450228   11108 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0219 04:44:14.488487   11108 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0219 04:44:14.498444   11108 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0219 04:44:14.506623   11108 start.go:553] Will wait 60s for crictl version
	I0219 04:44:14.514619   11108 ssh_runner.go:195] Run: which crictl
	I0219 04:44:14.538044   11108 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0219 04:44:15.471704   11108 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0219 04:44:15.480379   11108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:44:15.552954   11108 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:44:15.703317   11108 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0219 04:44:15.703615   11108 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0219 04:44:15.711591   11108 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0219 04:44:15.711591   11108 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0219 04:44:15.711591   11108 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0219 04:44:15.711591   11108 ip.go:207] Found interface: {Index:11 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7f:a7:14 Flags:up|broadcast|multicast|running}
	I0219 04:44:15.714899   11108 ip.go:210] interface addr: fe80::8ff9:73c7:b894:c84f/64
	I0219 04:44:15.714899   11108 ip.go:210] interface addr: 172.28.240.1/20
	I0219 04:44:15.724313   11108 ssh_runner.go:195] Run: grep 172.28.240.1	host.minikube.internal$ /etc/hosts
	I0219 04:44:15.730435   11108 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:44:15.740402   11108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:44:15.784277   11108 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:44:15.784353   11108 docker.go:560] Images already preloaded, skipping extraction
	I0219 04:44:15.792052   11108 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:44:15.850760   11108 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:44:15.850760   11108 cache_images.go:84] Images are preloaded, skipping loading
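The "Images are preloaded, skipping loading" decision above compares the output of `docker images --format {{.Repository}}:{{.Tag}}` against the image set expected for the Kubernetes version. A simplified sketch of that set-difference check (not minikube's exact logic):

```python
# Subset of images expected for v1.26.1, taken from this log's preload list.
expected = {
    "registry.k8s.io/kube-apiserver:v1.26.1",
    "registry.k8s.io/etcd:3.5.6-0",
    "registry.k8s.io/pause:3.9",
}
# What `docker images` reported (abridged from the -- stdout -- block above).
got = {
    "registry.k8s.io/kube-apiserver:v1.26.1",
    "registry.k8s.io/kube-scheduler:v1.26.1",
    "registry.k8s.io/etcd:3.5.6-0",
    "registry.k8s.io/pause:3.9",
    "gcr.io/k8s-minikube/storage-provisioner:v5",
}
missing = expected - got
print(sorted(missing))  # → [] : nothing missing, so extraction is skipped
```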
	I0219 04:44:15.859180   11108 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0219 04:44:16.944555   11108 ssh_runner.go:235] Completed: docker info --format {{.CgroupDriver}}: (1.0853791s)
	I0219 04:44:16.944555   11108 cni.go:84] Creating CNI manager for ""
	I0219 04:44:16.944555   11108 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:44:16.944555   11108 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0219 04:44:16.944555   11108 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.246.210 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-061400 NodeName:pause-061400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.246.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.246.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0219 04:44:16.944555   11108 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.246.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "pause-061400"
	  kubeletExtraArgs:
	    node-ip: 172.28.246.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.246.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
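The generated kubeadm config above is a four-document YAML stream. One quick sanity check is confirming each expected `kind` appears once; a sketch using plain string splitting (a real check would use a YAML parser):

```python
# Abridged copy of the multi-document structure logged by kubeadm.go:177.
config = """apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

# Split on the document separator and collect each doc's kind line.
kinds = [line.split(": ", 1)[1]
         for doc in config.split("---")
         for line in doc.strip().splitlines()
         if line.startswith("kind:")]
print(kinds)  # → ['InitConfiguration', 'ClusterConfiguration', 'KubeletConfiguration', 'KubeProxyConfiguration']
```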
	I0219 04:44:16.945147   11108 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-061400 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.246.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:pause-061400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0219 04:44:16.955681   11108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0219 04:44:16.971425   11108 binaries.go:44] Found k8s binaries, skipping transfer
	I0219 04:44:16.982060   11108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0219 04:44:17.004750   11108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (446 bytes)
	I0219 04:44:17.034258   11108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0219 04:44:17.065805   11108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0219 04:44:17.109642   11108 ssh_runner.go:195] Run: grep 172.28.246.210	control-plane.minikube.internal$ /etc/hosts
	I0219 04:44:17.116611   11108 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-061400 for IP: 172.28.246.210
	I0219 04:44:17.116611   11108 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:44:17.117397   11108 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0219 04:44:17.117770   11108 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0219 04:44:17.118416   11108 certs.go:311] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-061400\client.key
	I0219 04:44:17.118726   11108 certs.go:311] skipping minikube signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-061400\apiserver.key.fd17deda
	I0219 04:44:17.119084   11108 certs.go:311] skipping aggregator signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-061400\proxy-client.key
	I0219 04:44:17.120147   11108 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem (1338 bytes)
	W0219 04:44:17.120360   11108 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148_empty.pem, impossibly tiny 0 bytes
	I0219 04:44:17.120360   11108 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0219 04:44:17.120360   11108 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0219 04:44:17.120927   11108 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0219 04:44:17.121291   11108 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0219 04:44:17.121833   11108 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem (1708 bytes)
	I0219 04:44:17.123051   11108 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-061400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0219 04:44:17.168086   11108 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-061400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0219 04:44:17.211521   11108 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-061400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0219 04:44:17.255148   11108 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-061400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0219 04:44:17.308070   11108 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0219 04:44:17.350765   11108 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0219 04:44:17.431362   11108 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0219 04:44:17.489862   11108 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0219 04:44:17.537552   11108 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem --> /usr/share/ca-certificates/10148.pem (1338 bytes)
	I0219 04:44:17.586204   11108 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /usr/share/ca-certificates/101482.pem (1708 bytes)
	I0219 04:44:17.637411   11108 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0219 04:44:17.686413   11108 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0219 04:44:17.732941   11108 ssh_runner.go:195] Run: openssl version
	I0219 04:44:17.750094   11108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101482.pem && ln -fs /usr/share/ca-certificates/101482.pem /etc/ssl/certs/101482.pem"
	I0219 04:44:17.775934   11108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101482.pem
	I0219 04:44:17.784811   11108 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:44:17.797080   11108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101482.pem
	I0219 04:44:17.815997   11108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101482.pem /etc/ssl/certs/3ec20f2e.0"
	I0219 04:44:17.844275   11108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0219 04:44:17.874257   11108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:44:17.882655   11108 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:44:17.895109   11108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:44:17.915824   11108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0219 04:44:17.952050   11108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10148.pem && ln -fs /usr/share/ca-certificates/10148.pem /etc/ssl/certs/10148.pem"
	I0219 04:44:17.980042   11108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10148.pem
	I0219 04:44:17.988061   11108 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:44:17.997050   11108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10148.pem
	I0219 04:44:18.019172   11108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10148.pem /etc/ssl/certs/51391683.0"
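The three cert-install sequences above all follow one pattern: copy the PEM into `/usr/share/ca-certificates`, then symlink `/etc/ssl/certs/<subject-hash>.0` to it, where the hash comes from `openssl x509 -hash -noout -in <cert>`. A sketch that reconstructs the link commands from the hash values appearing in this log:

```python
# Subject hashes as computed by `openssl x509 -hash` for each cert in this log.
ca_links = {
    "101482.pem": "3ec20f2e",
    "minikubeCA.pem": "b5213941",
    "10148.pem": "51391683",
}

# Rebuild the `ln -fs` commands that the sudo shell one-liners above run.
commands = [
    f"ln -fs /etc/ssl/certs/{pem} /etc/ssl/certs/{h}.0"
    for pem, h in ca_links.items()
]
for cmd in commands:
    print(cmd)
```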
	I0219 04:44:18.036509   11108 kubeadm.go:401] StartCluster: {Name:pause-061400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:pause-061400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.246.210 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:44:18.044507   11108 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0219 04:44:18.100291   11108 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0219 04:44:18.118935   11108 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0219 04:44:18.118935   11108 kubeadm.go:633] restartCluster start
	I0219 04:44:18.128697   11108 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0219 04:44:18.152928   11108 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:18.154180   11108 kubeconfig.go:92] found "pause-061400" server: "https://172.28.246.210:8443"
	I0219 04:44:18.156996   11108 kapi.go:59] client config for pause-061400: &rest.Config{Host:"https://172.28.246.210:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-061400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-061400\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:44:18.170928   11108 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0219 04:44:18.190460   11108 api_server.go:165] Checking apiserver status ...
	I0219 04:44:18.204105   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0219 04:44:18.225382   11108 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:18.734884   11108 api_server.go:165] Checking apiserver status ...
	I0219 04:44:18.745581   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0219 04:44:18.765914   11108 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:19.226542   11108 api_server.go:165] Checking apiserver status ...
	I0219 04:44:19.236599   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0219 04:44:19.258033   11108 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:19.732736   11108 api_server.go:165] Checking apiserver status ...
	I0219 04:44:19.744386   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0219 04:44:19.764878   11108 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:20.226607   11108 api_server.go:165] Checking apiserver status ...
	I0219 04:44:20.238609   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0219 04:44:20.271133   11108 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:20.737188   11108 api_server.go:165] Checking apiserver status ...
	I0219 04:44:20.749213   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0219 04:44:20.770512   11108 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:21.232173   11108 api_server.go:165] Checking apiserver status ...
	I0219 04:44:21.241657   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0219 04:44:21.269717   11108 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:21.740827   11108 api_server.go:165] Checking apiserver status ...
	I0219 04:44:21.752655   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0219 04:44:21.772022   11108 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:22.232458   11108 api_server.go:165] Checking apiserver status ...
	I0219 04:44:22.244055   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0219 04:44:22.262757   11108 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:22.736843   11108 api_server.go:165] Checking apiserver status ...
	I0219 04:44:22.748854   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0219 04:44:22.789112   11108 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:23.240325   11108 api_server.go:165] Checking apiserver status ...
	I0219 04:44:23.252124   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0219 04:44:23.272503   11108 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:23.733059   11108 api_server.go:165] Checking apiserver status ...
	I0219 04:44:23.743793   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0219 04:44:23.764522   11108 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:24.237702   11108 api_server.go:165] Checking apiserver status ...
	I0219 04:44:24.247628   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0219 04:44:24.266157   11108 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:24.730759   11108 api_server.go:165] Checking apiserver status ...
	I0219 04:44:24.741276   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0219 04:44:24.765527   11108 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:25.237985   11108 api_server.go:165] Checking apiserver status ...
	I0219 04:44:25.249549   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0219 04:44:25.277663   11108 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:25.727533   11108 api_server.go:165] Checking apiserver status ...
	I0219 04:44:25.740697   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0219 04:44:25.816478   11108 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:26.234423   11108 api_server.go:165] Checking apiserver status ...
	I0219 04:44:26.248591   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:44:26.292745   11108 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6864/cgroup
	I0219 04:44:26.309549   11108 api_server.go:181] apiserver freezer: "9:freezer:/kubepods/burstable/podfbe347cf50a30a6e5d925d82a5eab233/c68aa6c91f3f14d00b9ca211f2eddf50a7a51a5c27749d1c390715dc864c5a55"
	I0219 04:44:26.323401   11108 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podfbe347cf50a30a6e5d925d82a5eab233/c68aa6c91f3f14d00b9ca211f2eddf50a7a51a5c27749d1c390715dc864c5a55/freezer.state
	I0219 04:44:26.340386   11108 api_server.go:203] freezer state: "THAWED"
	I0219 04:44:26.340386   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:31.352819   11108 api_server.go:268] stopped: https://172.28.246.210:8443/healthz: Get "https://172.28.246.210:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0219 04:44:31.352885   11108 retry.go:31] will retry after 191.910321ms: state is "Stopped"
	I0219 04:44:31.546164   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:32.319326   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0219 04:44:32.320282   11108 retry.go:31] will retry after 350.251832ms: https://172.28.246.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0219 04:44:32.679932   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:32.709029   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:32.709130   11108 retry.go:31] will retry after 463.357346ms: https://172.28.246.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:33.184363   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:33.194408   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:33.194408   11108 retry.go:31] will retry after 563.560002ms: https://172.28.246.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:33.771471   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:33.784494   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:33.784633   11108 retry.go:31] will retry after 751.658374ms: https://172.28.246.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:34.548808   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:34.561197   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:34.561249   11108 retry.go:31] will retry after 633.793227ms: https://172.28.246.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:35.197779   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:35.210354   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:35.210354   11108 retry.go:31] will retry after 1.001714765s: https://172.28.246.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:36.215984   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:36.225191   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:36.225472   11108 retry.go:31] will retry after 1.478937264s: https://172.28.246.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:37.710862   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:37.720048   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:37.720126   11108 retry.go:31] will retry after 1.261304878s: https://172.28.246.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:38.995981   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:39.005972   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:39.005972   11108 retry.go:31] will retry after 2.098676523s: https://172.28.246.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:41.106246   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:41.115454   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:41.115763   11108 kubeadm.go:608] needs reconfigure: apiserver error: https://172.28.246.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:41.115812   11108 kubeadm.go:1120] stopping kube-system containers ...
	I0219 04:44:41.127215   11108 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0219 04:44:41.175517   11108 docker.go:456] Stopping containers: [7e146eb9b748 fa50d192f649 a0afef71105f c68aa6c91f3f c80aca6e1a30 80b34d8effdc 9a8d52471ec0 20ba1b6e4482 864e078083a6 15064c3c6813 6da63f0b880e 238e4cc4e697 cb12b8aae594 deb8e4482cca 66f405c5eb82 7c6bf0b18332 bb119d3f422a b6699319530d 5864b4e9f767 a73c696cbf80 d9f886c8b3f1 3cdf0f7fc6f8 aecde6be832f 575d786445b7]
	I0219 04:44:41.184100   11108 ssh_runner.go:195] Run: docker stop 7e146eb9b748 fa50d192f649 a0afef71105f c68aa6c91f3f c80aca6e1a30 80b34d8effdc 9a8d52471ec0 20ba1b6e4482 864e078083a6 15064c3c6813 6da63f0b880e 238e4cc4e697 cb12b8aae594 deb8e4482cca 66f405c5eb82 7c6bf0b18332 bb119d3f422a b6699319530d 5864b4e9f767 a73c696cbf80 d9f886c8b3f1 3cdf0f7fc6f8 aecde6be832f 575d786445b7
	I0219 04:44:47.213662   11108 ssh_runner.go:235] Completed: docker stop 7e146eb9b748 fa50d192f649 a0afef71105f c68aa6c91f3f c80aca6e1a30 80b34d8effdc 9a8d52471ec0 20ba1b6e4482 864e078083a6 15064c3c6813 6da63f0b880e 238e4cc4e697 cb12b8aae594 deb8e4482cca 66f405c5eb82 7c6bf0b18332 bb119d3f422a b6699319530d 5864b4e9f767 a73c696cbf80 d9f886c8b3f1 3cdf0f7fc6f8 aecde6be832f 575d786445b7: (6.0295823s)
	I0219 04:44:47.223763   11108 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0219 04:44:47.282009   11108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0219 04:44:47.298150   11108 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 19 04:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Feb 19 04:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Feb 19 04:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Feb 19 04:41 /etc/kubernetes/scheduler.conf
	
	I0219 04:44:47.307607   11108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0219 04:44:47.338086   11108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0219 04:44:47.363446   11108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0219 04:44:47.378745   11108 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:47.389393   11108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0219 04:44:47.413775   11108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0219 04:44:47.427857   11108 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:47.436362   11108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0219 04:44:47.459844   11108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0219 04:44:47.476446   11108 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0219 04:44:47.476497   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:47.572429   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:48.433259   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:48.776226   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:48.873899   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:48.995346   11108 api_server.go:51] waiting for apiserver process to appear ...
	I0219 04:44:49.006164   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:44:49.029409   11108 api_server.go:71] duration metric: took 34.0629ms to wait for apiserver process to appear ...
	I0219 04:44:49.029409   11108 api_server.go:87] waiting for apiserver healthz status ...
	I0219 04:44:49.029409   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:54.041283   11108 api_server.go:268] stopped: https://172.28.246.210:8443/healthz: Get "https://172.28.246.210:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0219 04:44:54.552864   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:54.649711   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0219 04:44:54.649711   11108 api_server.go:102] status: https://172.28.246.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0219 04:44:55.044032   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:55.054447   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0219 04:44:55.054447   11108 api_server.go:102] status: https://172.28.246.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:55.549821   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:55.564375   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0219 04:44:55.564375   11108 api_server.go:102] status: https://172.28.246.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:56.053301   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:56.061964   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 200:
	ok
	I0219 04:44:56.081908   11108 api_server.go:140] control plane version: v1.26.1
	I0219 04:44:56.082027   11108 api_server.go:130] duration metric: took 7.0526432s to wait for apiserver health ...
	I0219 04:44:56.082165   11108 cni.go:84] Creating CNI manager for ""
	I0219 04:44:56.082165   11108 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:44:56.085258   11108 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0219 04:44:56.096851   11108 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0219 04:44:56.113341   11108 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0219 04:44:56.144521   11108 system_pods.go:43] waiting for kube-system pods to appear ...
	I0219 04:44:56.159074   11108 system_pods.go:59] 6 kube-system pods found
	I0219 04:44:56.159140   11108 system_pods.go:61] "coredns-787d4945fb-mjptj" [305ace80-8a26-4015-a001-8b39b2b2a3ec] Running
	I0219 04:44:56.159169   11108 system_pods.go:61] "etcd-pause-061400" [5a7f37ea-cadb-4c05-8b9a-5348add9549c] Running
	I0219 04:44:56.159169   11108 system_pods.go:61] "kube-apiserver-pause-061400" [ad50ee06-f8fc-4765-b66b-6cfd393e1fc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0219 04:44:56.159169   11108 system_pods.go:61] "kube-controller-manager-pause-061400" [6dbc60d0-4db1-4da3-bb30-987d27afe1fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0219 04:44:56.159214   11108 system_pods.go:61] "kube-proxy-mgb72" [df76445b-2fa1-405c-9cd6-46a18b28ef95] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0219 04:44:56.159214   11108 system_pods.go:61] "kube-scheduler-pause-061400" [74e11769-c7e7-47db-b848-85368297db6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0219 04:44:56.159214   11108 system_pods.go:74] duration metric: took 14.6932ms to wait for pod list to return data ...
	I0219 04:44:56.159244   11108 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:44:56.172932   11108 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:44:56.173105   11108 node_conditions.go:123] node cpu capacity is 2
	I0219 04:44:56.173105   11108 node_conditions.go:105] duration metric: took 13.861ms to run NodePressure ...
	I0219 04:44:56.173105   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:56.697644   11108 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0219 04:44:56.710678   11108 kubeadm.go:784] kubelet initialised
	I0219 04:44:56.710723   11108 kubeadm.go:785] duration metric: took 13.0798ms waiting for restarted kubelet to initialise ...
	I0219 04:44:56.710751   11108 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:44:56.725013   11108 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-mjptj" in "kube-system" namespace to be "Ready" ...
	I0219 04:44:56.759805   11108 pod_ready.go:92] pod "coredns-787d4945fb-mjptj" in "kube-system" namespace has status "Ready":"True"
	I0219 04:44:56.759805   11108 pod_ready.go:81] duration metric: took 34.7536ms waiting for pod "coredns-787d4945fb-mjptj" in "kube-system" namespace to be "Ready" ...
	I0219 04:44:56.759906   11108 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:44:58.810430   11108 pod_ready.go:102] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"False"
	I0219 04:45:01.302021   11108 pod_ready.go:102] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"False"
	I0219 04:45:03.307911   11108 pod_ready.go:102] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"False"
	I0219 04:45:05.308148   11108 pod_ready.go:102] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"False"
	I0219 04:45:07.309416   11108 pod_ready.go:102] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"False"
	I0219 04:45:07.800023   11108 pod_ready.go:92] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:07.800023   11108 pod_ready.go:81] duration metric: took 11.0401552s waiting for pod "etcd-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.800023   11108 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.815009   11108 pod_ready.go:92] pod "kube-apiserver-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:07.815009   11108 pod_ready.go:81] duration metric: took 14.9857ms waiting for pod "kube-apiserver-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.815009   11108 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.824030   11108 pod_ready.go:92] pod "kube-controller-manager-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:07.824030   11108 pod_ready.go:81] duration metric: took 9.0219ms waiting for pod "kube-controller-manager-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.824030   11108 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mgb72" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.835012   11108 pod_ready.go:92] pod "kube-proxy-mgb72" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:07.835012   11108 pod_ready.go:81] duration metric: took 10.9817ms waiting for pod "kube-proxy-mgb72" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.835012   11108 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.843013   11108 pod_ready.go:92] pod "kube-scheduler-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:07.844056   11108 pod_ready.go:81] duration metric: took 9.0436ms waiting for pod "kube-scheduler-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.844056   11108 pod_ready.go:38] duration metric: took 11.1333434s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:45:07.844056   11108 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0219 04:45:07.865633   11108 ops.go:34] apiserver oom_adj: -16
	I0219 04:45:07.865633   11108 kubeadm.go:637] restartCluster took 49.7468737s
	I0219 04:45:07.865633   11108 kubeadm.go:403] StartCluster complete in 49.8293001s
	I0219 04:45:07.865633   11108 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:45:07.865633   11108 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:45:07.867640   11108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:45:07.869661   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0219 04:45:07.869661   11108 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0219 04:45:07.873626   11108 out.go:177] * Enabled addons: 
	I0219 04:45:07.869661   11108 config.go:182] Loaded profile config "pause-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:45:07.875636   11108 addons.go:492] enable addons completed in 5.9748ms: enabled=[]
	I0219 04:45:07.878671   11108 kapi.go:59] client config for pause-061400: &rest.Config{Host:"https://172.28.246.210:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-061400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-061400\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:45:07.885640   11108 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-061400" context rescaled to 1 replicas
	I0219 04:45:07.885640   11108 start.go:223] Will wait 6m0s for node &{Name: IP:172.28.246.210 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:45:07.887626   11108 out.go:177] * Verifying Kubernetes components...
	I0219 04:45:07.901637   11108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:45:08.040097   11108 node_ready.go:35] waiting up to 6m0s for node "pause-061400" to be "Ready" ...
	I0219 04:45:08.041111   11108 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0219 04:45:08.046088   11108 node_ready.go:49] node "pause-061400" has status "Ready":"True"
	I0219 04:45:08.046088   11108 node_ready.go:38] duration metric: took 4.9772ms waiting for node "pause-061400" to be "Ready" ...
	I0219 04:45:08.047114   11108 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:45:08.204562   11108 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-mjptj" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:08.600526   11108 pod_ready.go:92] pod "coredns-787d4945fb-mjptj" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:08.601519   11108 pod_ready.go:81] duration metric: took 396.9585ms waiting for pod "coredns-787d4945fb-mjptj" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:08.601519   11108 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.001285   11108 pod_ready.go:92] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:09.001285   11108 pod_ready.go:81] duration metric: took 399.7671ms waiting for pod "etcd-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.001285   11108 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.411539   11108 pod_ready.go:92] pod "kube-apiserver-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:09.411539   11108 pod_ready.go:81] duration metric: took 410.2559ms waiting for pod "kube-apiserver-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.411539   11108 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.802176   11108 pod_ready.go:92] pod "kube-controller-manager-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:09.802176   11108 pod_ready.go:81] duration metric: took 390.6384ms waiting for pod "kube-controller-manager-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.802176   11108 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mgb72" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:10.203669   11108 pod_ready.go:92] pod "kube-proxy-mgb72" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:10.203669   11108 pod_ready.go:81] duration metric: took 401.4946ms waiting for pod "kube-proxy-mgb72" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:10.203736   11108 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:10.599228   11108 pod_ready.go:92] pod "kube-scheduler-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:10.599228   11108 pod_ready.go:81] duration metric: took 395.494ms waiting for pod "kube-scheduler-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:10.599228   11108 pod_ready.go:38] duration metric: took 2.5521234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:45:10.599228   11108 api_server.go:51] waiting for apiserver process to appear ...
	I0219 04:45:10.610148   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:45:10.637038   11108 api_server.go:71] duration metric: took 2.7514082s to wait for apiserver process to appear ...
	I0219 04:45:10.637038   11108 api_server.go:87] waiting for apiserver healthz status ...
	I0219 04:45:10.637038   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:45:10.647517   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 200:
	ok
	I0219 04:45:10.649514   11108 api_server.go:140] control plane version: v1.26.1
	I0219 04:45:10.649514   11108 api_server.go:130] duration metric: took 12.4755ms to wait for apiserver health ...
	I0219 04:45:10.649514   11108 system_pods.go:43] waiting for kube-system pods to appear ...
	I0219 04:45:10.811236   11108 system_pods.go:59] 6 kube-system pods found
	I0219 04:45:10.811236   11108 system_pods.go:61] "coredns-787d4945fb-mjptj" [305ace80-8a26-4015-a001-8b39b2b2a3ec] Running
	I0219 04:45:10.811236   11108 system_pods.go:61] "etcd-pause-061400" [5a7f37ea-cadb-4c05-8b9a-5348add9549c] Running
	I0219 04:45:10.811236   11108 system_pods.go:61] "kube-apiserver-pause-061400" [ad50ee06-f8fc-4765-b66b-6cfd393e1fc8] Running
	I0219 04:45:10.811236   11108 system_pods.go:61] "kube-controller-manager-pause-061400" [6dbc60d0-4db1-4da3-bb30-987d27afe1fd] Running
	I0219 04:45:10.811236   11108 system_pods.go:61] "kube-proxy-mgb72" [df76445b-2fa1-405c-9cd6-46a18b28ef95] Running
	I0219 04:45:10.811236   11108 system_pods.go:61] "kube-scheduler-pause-061400" [74e11769-c7e7-47db-b848-85368297db6e] Running
	I0219 04:45:10.811236   11108 system_pods.go:74] duration metric: took 161.7234ms to wait for pod list to return data ...
	I0219 04:45:10.811236   11108 default_sa.go:34] waiting for default service account to be created ...
	I0219 04:45:10.997573   11108 default_sa.go:45] found service account: "default"
	I0219 04:45:10.997671   11108 default_sa.go:55] duration metric: took 186.4355ms for default service account to be created ...
	I0219 04:45:10.997671   11108 system_pods.go:116] waiting for k8s-apps to be running ...
	I0219 04:45:11.249598   11108 system_pods.go:86] 6 kube-system pods found
	I0219 04:45:11.249660   11108 system_pods.go:89] "coredns-787d4945fb-mjptj" [305ace80-8a26-4015-a001-8b39b2b2a3ec] Running
	I0219 04:45:11.249660   11108 system_pods.go:89] "etcd-pause-061400" [5a7f37ea-cadb-4c05-8b9a-5348add9549c] Running
	I0219 04:45:11.249660   11108 system_pods.go:89] "kube-apiserver-pause-061400" [ad50ee06-f8fc-4765-b66b-6cfd393e1fc8] Running
	I0219 04:45:11.249660   11108 system_pods.go:89] "kube-controller-manager-pause-061400" [6dbc60d0-4db1-4da3-bb30-987d27afe1fd] Running
	I0219 04:45:11.249660   11108 system_pods.go:89] "kube-proxy-mgb72" [df76445b-2fa1-405c-9cd6-46a18b28ef95] Running
	I0219 04:45:11.249660   11108 system_pods.go:89] "kube-scheduler-pause-061400" [74e11769-c7e7-47db-b848-85368297db6e] Running
	I0219 04:45:11.249723   11108 system_pods.go:126] duration metric: took 252.0526ms to wait for k8s-apps to be running ...
	I0219 04:45:11.249756   11108 system_svc.go:44] waiting for kubelet service to be running ....
	I0219 04:45:11.260020   11108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:45:11.282404   11108 system_svc.go:56] duration metric: took 32.6482ms WaitForService to wait for kubelet.
	I0219 04:45:11.282470   11108 kubeadm.go:578] duration metric: took 3.3968425s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0219 04:45:11.282470   11108 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:45:11.494654   11108 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:45:11.494654   11108 node_conditions.go:123] node cpu capacity is 2
	I0219 04:45:11.494654   11108 node_conditions.go:105] duration metric: took 212.1846ms to run NodePressure ...
	I0219 04:45:11.494654   11108 start.go:228] waiting for startup goroutines ...
	I0219 04:45:11.494654   11108 start.go:233] waiting for cluster config update ...
	I0219 04:45:11.494654   11108 start.go:242] writing updated cluster config ...
	I0219 04:45:11.506084   11108 ssh_runner.go:195] Run: rm -f paused
	I0219 04:45:11.716356   11108 start.go:555] kubectl: 1.18.2, cluster: 1.26.1 (minor skew: 8)
	I0219 04:45:11.786610   11108 out.go:177] 
	W0219 04:45:11.938234   11108 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.1.
	! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.1.
	I0219 04:45:12.039082   11108 out.go:177]   - Want kubectl v1.26.1? Try 'minikube kubectl -- get pods -A'
	I0219 04:45:12.233949   11108 out.go:177] * Done! kubectl is now configured to use "pause-061400" cluster and "default" namespace by default

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-061400 -n pause-061400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-061400 -n pause-061400: (5.1200677s)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-061400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-061400 logs -n 25: (6.1543974s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-843300 sudo cat              | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /lib/systemd/system/containerd.service |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo cat              | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /etc/containerd/config.toml            |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                  | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | containerd config dump                 |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                  | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl status crio --all            |                           |                   |         |                     |                     |
	|         | --full --no-pager                      |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                  | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl cat crio --no-pager          |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo find             | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo crio             | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | config                                 |                           |                   |         |                     |                     |
	| delete  | -p cilium-843300                       | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT | 19 Feb 23 04:33 GMT |
	| start   | -p kubernetes-upgrade-803700           | kubernetes-upgrade-803700 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT | 19 Feb 23 04:39 GMT |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0           |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| ssh     | force-systemd-flag-928900              | force-systemd-flag-928900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:35 GMT | 19 Feb 23 04:35 GMT |
	|         | ssh docker info --format               |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                      |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-flag-928900           | force-systemd-flag-928900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:35 GMT | 19 Feb 23 04:36 GMT |
	| delete  | -p offline-docker-928900               | offline-docker-928900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:36 GMT | 19 Feb 23 04:37 GMT |
	| delete  | -p NoKubernetes-928900                 | NoKubernetes-928900       | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:38 GMT | 19 Feb 23 04:39 GMT |
	| start   | -p pause-061400 --memory=2048          | pause-061400              | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:39 GMT | 19 Feb 23 04:41 GMT |
	|         | --install-addons=false                 |                           |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv             |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-803700           | kubernetes-upgrade-803700 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:39 GMT | 19 Feb 23 04:40 GMT |
	| start   | -p stopped-upgrade-608000              | stopped-upgrade-608000    | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:39 GMT |                     |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-803700           | kubernetes-upgrade-803700 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:40 GMT | 19 Feb 23 04:43 GMT |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.26.1           |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-940200              | running-upgrade-940200    | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:41 GMT |                     |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p pause-061400                        | pause-061400              | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:41 GMT | 19 Feb 23 04:45 GMT |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| delete  | -p stopped-upgrade-608000              | stopped-upgrade-608000    | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:42 GMT | 19 Feb 23 04:42 GMT |
	| start   | -p cert-expiration-011800              | cert-expiration-011800    | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:42 GMT |                     |
	|         | --memory=2048                          |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-803700           | kubernetes-upgrade-803700 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:43 GMT |                     |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-803700           | kubernetes-upgrade-803700 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:43 GMT |                     |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.26.1           |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| delete  | -p running-upgrade-940200              | running-upgrade-940200    | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:44 GMT | 19 Feb 23 04:44 GMT |
	| start   | -p docker-flags-045000                 | docker-flags-045000       | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:44 GMT |                     |
	|         | --cache-images=false                   |                           |                   |         |                     |                     |
	|         | --memory=2048                          |                           |                   |         |                     |                     |
	|         | --install-addons=false                 |                           |                   |         |                     |                     |
	|         | --wait=false                           |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR                   |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT                   |                           |                   |         |                     |                     |
	|         | --docker-opt=debug                     |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	|---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/19 04:44:44
	Running on machine: minikube1
	Binary: Built with gc go1.20 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0219 04:44:44.109149    3548 out.go:296] Setting OutFile to fd 1672 ...
	I0219 04:44:44.177479    3548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:44:44.177479    3548 out.go:309] Setting ErrFile to fd 1760...
	I0219 04:44:44.177479    3548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:44:44.205566    3548 out.go:303] Setting JSON to false
	I0219 04:44:44.209017    3548 start.go:125] hostinfo: {"hostname":"minikube1","uptime":18873,"bootTime":1676763010,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2604 Build 19045.2604","kernelVersion":"10.0.19045.2604 Build 19045.2604","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0219 04:44:44.210007    3548 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0219 04:44:44.223270    3548 out.go:177] * [docker-flags-045000] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	I0219 04:44:44.231122    3548 notify.go:220] Checking for updates...
	I0219 04:44:44.238080    3548 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:44:44.253070    3548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0219 04:44:44.258069    3548 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0219 04:44:44.263068    3548 out.go:177]   - MINIKUBE_LOCATION=master
	I0219 04:44:44.273094    3548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0219 04:44:41.720207   10072 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-011800
	
	I0219 04:44:41.720207   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:42.514034   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:42.514234   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:42.514234   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:44:43.643724   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:44:43.643920   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:43.648244   10072 main.go:141] libmachine: Using SSH client type: native
	I0219 04:44:43.649015   10072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.128 22 <nil> <nil>}
	I0219 04:44:43.649015   10072 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-011800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-011800/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-011800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:44:43.824966   10072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0219 04:44:43.824966   10072 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:44:43.824966   10072 buildroot.go:174] setting up certificates
	I0219 04:44:43.824966   10072 provision.go:83] configureAuth start
	I0219 04:44:43.824966   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:44.668810   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:44.668810   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:44.668810   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:44:45.779351   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:44:45.779396   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:45.779396   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:44.279087    3548 config.go:182] Loaded profile config "cert-expiration-011800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:44:44.280094    3548 config.go:182] Loaded profile config "kubernetes-upgrade-803700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:44:44.280094    3548 config.go:182] Loaded profile config "pause-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:44:44.280094    3548 driver.go:365] Setting default libvirt URI to qemu:///system
	I0219 04:44:46.035036    3548 out.go:177] * Using the hyperv driver based on user configuration
	I0219 04:44:46.036846    3548 start.go:296] selected driver: hyperv
	I0219 04:44:46.040211    3548 start.go:857] validating driver "hyperv" against <nil>
	I0219 04:44:46.040211    3548 start.go:868] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0219 04:44:46.090546    3548 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0219 04:44:46.091547    3548 start_flags.go:914] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0219 04:44:46.091547    3548 cni.go:84] Creating CNI manager for ""
	I0219 04:44:46.091547    3548 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:44:46.091547    3548 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0219 04:44:46.091547    3548 start_flags.go:319] config:
	{Name:docker-flags-045000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:docker-flags-045000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:44:46.091547    3548 iso.go:125] acquiring lock: {Name:mk0a282de77c20a01e287b73437e6c43df35e4e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:44:46.095338    3548 out.go:177] * Starting control plane node docker-flags-045000 in cluster docker-flags-045000
	I0219 04:44:47.213662   11108 ssh_runner.go:235] Completed: docker stop 7e146eb9b748 fa50d192f649 a0afef71105f c68aa6c91f3f c80aca6e1a30 80b34d8effdc 9a8d52471ec0 20ba1b6e4482 864e078083a6 15064c3c6813 6da63f0b880e 238e4cc4e697 cb12b8aae594 deb8e4482cca 66f405c5eb82 7c6bf0b18332 bb119d3f422a b6699319530d 5864b4e9f767 a73c696cbf80 d9f886c8b3f1 3cdf0f7fc6f8 aecde6be832f 575d786445b7: (6.0295823s)
	I0219 04:44:47.223763   11108 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0219 04:44:47.282009   11108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0219 04:44:47.298150   11108 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 19 04:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Feb 19 04:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Feb 19 04:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Feb 19 04:41 /etc/kubernetes/scheduler.conf
	
	I0219 04:44:47.307607   11108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0219 04:44:47.338086   11108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0219 04:44:47.363446   11108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0219 04:44:46.100482    3548 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:44:46.100807    3548 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0219 04:44:46.100807    3548 cache.go:57] Caching tarball of preloaded images
	I0219 04:44:46.100965    3548 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0219 04:44:46.100965    3548 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0219 04:44:46.101507    3548 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\docker-flags-045000\config.json ...
	I0219 04:44:46.101772    3548 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\docker-flags-045000\config.json: {Name:mkadbdd8f4461eea2d4941f45670077824768adf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:44:46.101772    3548 cache.go:193] Successfully downloaded all kic artifacts
	I0219 04:44:46.101772    3548 start.go:364] acquiring machines lock for docker-flags-045000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0219 04:44:46.544622   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:46.544622   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:46.544848   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:44:47.602216   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:44:47.602216   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:47.602216   10072 provision.go:138] copyHostCerts
	I0219 04:44:47.602216   10072 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:44:47.602216   10072 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:44:47.602216   10072 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:44:47.604493   10072 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:44:47.604493   10072 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:44:47.604858   10072 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:44:47.605853   10072 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:44:47.605853   10072 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:44:47.606222   10072 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:44:47.607483   10072 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cert-expiration-011800 san=[172.28.248.128 172.28.248.128 localhost 127.0.0.1 minikube cert-expiration-011800]
	I0219 04:44:47.965443   10072 provision.go:172] copyRemoteCerts
	I0219 04:44:47.974446   10072 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:44:47.974446   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:48.735462   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:48.735462   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:48.735538   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:44:49.811784   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:44:49.811830   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:49.812006   10072 sshutil.go:53] new ssh client: &{IP:172.28.248.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-expiration-011800\id_rsa Username:docker}
	I0219 04:44:49.956222   10072 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.9817832s)
	I0219 04:44:49.956713   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:44:50.011153   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0219 04:44:50.054662   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0219 04:44:50.098679   10072 provision.go:86] duration metric: configureAuth took 6.2737353s
	I0219 04:44:50.098679   10072 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:44:50.099630   10072 config.go:182] Loaded profile config "cert-expiration-011800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:44:50.099630   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:50.858158   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:50.858158   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:50.858361   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:44:47.378745   11108 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:47.389393   11108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0219 04:44:47.413775   11108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0219 04:44:47.427857   11108 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:47.436362   11108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0219 04:44:47.459844   11108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0219 04:44:47.476446   11108 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0219 04:44:47.476497   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:47.572429   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:48.433259   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:48.776226   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:48.873899   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:48.995346   11108 api_server.go:51] waiting for apiserver process to appear ...
	I0219 04:44:49.006164   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:44:49.029409   11108 api_server.go:71] duration metric: took 34.0629ms to wait for apiserver process to appear ...
	I0219 04:44:49.029409   11108 api_server.go:87] waiting for apiserver healthz status ...
	I0219 04:44:49.029409   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:51.893188   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:44:51.893244   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:51.897352   10072 main.go:141] libmachine: Using SSH client type: native
	I0219 04:44:51.898094   10072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.128 22 <nil> <nil>}
	I0219 04:44:51.898094   10072 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:44:52.061273   10072 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:44:52.061273   10072 buildroot.go:70] root file system type: tmpfs
	I0219 04:44:52.061456   10072 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:44:52.061524   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:52.812618   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:52.812666   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:52.812698   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:44:53.866667   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:44:53.866667   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:53.869522   10072 main.go:141] libmachine: Using SSH client type: native
	I0219 04:44:53.870523   10072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.128 22 <nil> <nil>}
	I0219 04:44:53.870523   10072 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 04:44:54.061545   10072 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0219 04:44:54.061545   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:54.822314   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:54.822314   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:54.822480   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:44:55.866514   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:44:55.866718   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:55.871698   10072 main.go:141] libmachine: Using SSH client type: native
	I0219 04:44:55.872594   10072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.128 22 <nil> <nil>}
	I0219 04:44:55.872594   10072 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 04:44:54.041283   11108 api_server.go:268] stopped: https://172.28.246.210:8443/healthz: Get "https://172.28.246.210:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0219 04:44:54.552864   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:54.649711   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0219 04:44:54.649711   11108 api_server.go:102] status: https://172.28.246.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0219 04:44:55.044032   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:55.054447   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0219 04:44:55.054447   11108 api_server.go:102] status: https://172.28.246.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:55.549821   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:55.564375   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0219 04:44:55.564375   11108 api_server.go:102] status: https://172.28.246.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:56.053301   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:56.061964   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 200:
	ok
	I0219 04:44:56.081908   11108 api_server.go:140] control plane version: v1.26.1
	I0219 04:44:56.082027   11108 api_server.go:130] duration metric: took 7.0526432s to wait for apiserver health ...
	I0219 04:44:56.082165   11108 cni.go:84] Creating CNI manager for ""
	I0219 04:44:56.082165   11108 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:44:56.085258   11108 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0219 04:44:56.096851   11108 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0219 04:44:56.113341   11108 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0219 04:44:56.144521   11108 system_pods.go:43] waiting for kube-system pods to appear ...
	I0219 04:44:56.159074   11108 system_pods.go:59] 6 kube-system pods found
	I0219 04:44:56.159140   11108 system_pods.go:61] "coredns-787d4945fb-mjptj" [305ace80-8a26-4015-a001-8b39b2b2a3ec] Running
	I0219 04:44:56.159169   11108 system_pods.go:61] "etcd-pause-061400" [5a7f37ea-cadb-4c05-8b9a-5348add9549c] Running
	I0219 04:44:56.159169   11108 system_pods.go:61] "kube-apiserver-pause-061400" [ad50ee06-f8fc-4765-b66b-6cfd393e1fc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0219 04:44:56.159169   11108 system_pods.go:61] "kube-controller-manager-pause-061400" [6dbc60d0-4db1-4da3-bb30-987d27afe1fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0219 04:44:56.159214   11108 system_pods.go:61] "kube-proxy-mgb72" [df76445b-2fa1-405c-9cd6-46a18b28ef95] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0219 04:44:56.159214   11108 system_pods.go:61] "kube-scheduler-pause-061400" [74e11769-c7e7-47db-b848-85368297db6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0219 04:44:56.159214   11108 system_pods.go:74] duration metric: took 14.6932ms to wait for pod list to return data ...
	I0219 04:44:56.159244   11108 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:44:56.172932   11108 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:44:56.173105   11108 node_conditions.go:123] node cpu capacity is 2
	I0219 04:44:56.173105   11108 node_conditions.go:105] duration metric: took 13.861ms to run NodePressure ...
	I0219 04:44:56.173105   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:56.697644   11108 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0219 04:44:56.710678   11108 kubeadm.go:784] kubelet initialised
	I0219 04:44:56.710723   11108 kubeadm.go:785] duration metric: took 13.0798ms waiting for restarted kubelet to initialise ...
	I0219 04:44:56.710751   11108 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:44:56.725013   11108 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-mjptj" in "kube-system" namespace to be "Ready" ...
	I0219 04:44:56.759805   11108 pod_ready.go:92] pod "coredns-787d4945fb-mjptj" in "kube-system" namespace has status "Ready":"True"
	I0219 04:44:56.759805   11108 pod_ready.go:81] duration metric: took 34.7536ms waiting for pod "coredns-787d4945fb-mjptj" in "kube-system" namespace to be "Ready" ...
	I0219 04:44:56.759906   11108 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:44:57.130646   10072 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0219 04:44:57.130646   10072 machine.go:91] provisioned docker machine in 20.5168582s
	I0219 04:44:57.130646   10072 client.go:171] LocalClient.Create took 1m7.8090574s
	I0219 04:44:57.130646   10072 start.go:167] duration metric: libmachine.API.Create for "cert-expiration-011800" took 1m7.8090574s
	I0219 04:44:57.130646   10072 start.go:300] post-start starting for "cert-expiration-011800" (driver="hyperv")
	I0219 04:44:57.130646   10072 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 04:44:57.141774   10072 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 04:44:57.141774   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:57.902065   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:57.902065   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:57.902065   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:44:58.942449   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:44:58.942449   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:58.943057   10072 sshutil.go:53] new ssh client: &{IP:172.28.248.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-expiration-011800\id_rsa Username:docker}
	I0219 04:44:59.050005   10072 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.9081345s)
	I0219 04:44:59.060688   10072 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 04:44:59.069091   10072 info.go:137] Remote host: Buildroot 2021.02.12
	I0219 04:44:59.069091   10072 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 04:44:59.069435   10072 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 04:44:59.070481   10072 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 04:44:59.081690   10072 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 04:44:59.098418   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 04:44:59.152917   10072 start.go:303] post-start completed in 2.0222775s
	I0219 04:44:59.155958   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:59.883272   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:59.883342   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:59.883342   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:00.911035   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:45:00.911035   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:00.911418   10072 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\config.json ...
	I0219 04:45:00.914717   10072 start.go:128] duration metric: createHost completed in 1m11.5964913s
	I0219 04:45:00.914776   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:58.810430   11108 pod_ready.go:102] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"False"
	I0219 04:45:01.302021   11108 pod_ready.go:102] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"False"
	I0219 04:45:02.878735    4660 start.go:368] acquired machines lock for "kubernetes-upgrade-803700" in 1m18.9382132s
	I0219 04:45:02.879019    4660 start.go:96] Skipping create...Using existing machine configuration
	I0219 04:45:02.879019    4660 fix.go:55] fixHost starting: 
	I0219 04:45:02.879758    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:03.648880    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:03.649149    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:03.649149    4660 fix.go:103] recreateIfNeeded on kubernetes-upgrade-803700: state=Running err=<nil>
	W0219 04:45:03.649149    4660 fix.go:129] unexpected machine state, will restart: <nil>
	I0219 04:45:03.655474    4660 out.go:177] * Updating the running hyperv "kubernetes-upgrade-803700" VM ...
	I0219 04:45:01.655129   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:01.655129   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:01.655511   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:02.714243   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:45:02.714243   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:02.718947   10072 main.go:141] libmachine: Using SSH client type: native
	I0219 04:45:02.719761   10072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.128 22 <nil> <nil>}
	I0219 04:45:02.719761   10072 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0219 04:45:02.878400   10072 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676781902.871976388
	
	I0219 04:45:02.878400   10072 fix.go:207] guest clock: 1676781902.871976388
	I0219 04:45:02.878400   10072 fix.go:220] Guest: 2023-02-19 04:45:02.871976388 +0000 GMT Remote: 2023-02-19 04:45:00.9147176 +0000 GMT m=+155.178942301 (delta=1.957258788s)
	I0219 04:45:02.878400   10072 fix.go:191] guest clock delta is within tolerance: 1.957258788s
	I0219 04:45:02.878495   10072 start.go:83] releasing machines lock for "cert-expiration-011800", held for 1m13.5613659s
	I0219 04:45:02.878632   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:45:03.664974   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:03.664974   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:03.664974   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:04.725489   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:45:04.725489   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:04.729022   10072 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0219 04:45:04.729022   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:45:04.736633   10072 ssh_runner.go:195] Run: cat /version.json
	I0219 04:45:04.736633   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:45:05.522128   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:05.522316   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:05.522316   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:05.522818   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:05.522894   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:05.522894   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:03.657933    4660 machine.go:88] provisioning docker machine ...
	I0219 04:45:03.657933    4660 buildroot.go:166] provisioning hostname "kubernetes-upgrade-803700"
	I0219 04:45:03.657933    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:04.427124    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:04.427453    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:04.427523    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:05.522128    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:05.522316    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:05.525837    4660 main.go:141] libmachine: Using SSH client type: native
	I0219 04:45:05.527224    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.251.111 22 <nil> <nil>}
	I0219 04:45:05.527275    4660 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-803700 && echo "kubernetes-upgrade-803700" | sudo tee /etc/hostname
	I0219 04:45:05.720122    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-803700
	
	I0219 04:45:05.720197    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:06.472623    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:06.472623    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:06.472737    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:03.307911   11108 pod_ready.go:102] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"False"
	I0219 04:45:05.308148   11108 pod_ready.go:102] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"False"
	I0219 04:45:07.309416   11108 pod_ready.go:102] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"False"
	I0219 04:45:07.800023   11108 pod_ready.go:92] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:07.800023   11108 pod_ready.go:81] duration metric: took 11.0401552s waiting for pod "etcd-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.800023   11108 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.815009   11108 pod_ready.go:92] pod "kube-apiserver-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:07.815009   11108 pod_ready.go:81] duration metric: took 14.9857ms waiting for pod "kube-apiserver-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.815009   11108 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.824030   11108 pod_ready.go:92] pod "kube-controller-manager-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:07.824030   11108 pod_ready.go:81] duration metric: took 9.0219ms waiting for pod "kube-controller-manager-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.824030   11108 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mgb72" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.835012   11108 pod_ready.go:92] pod "kube-proxy-mgb72" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:07.835012   11108 pod_ready.go:81] duration metric: took 10.9817ms waiting for pod "kube-proxy-mgb72" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.835012   11108 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.843013   11108 pod_ready.go:92] pod "kube-scheduler-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:07.844056   11108 pod_ready.go:81] duration metric: took 9.0436ms waiting for pod "kube-scheduler-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.844056   11108 pod_ready.go:38] duration metric: took 11.1333434s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:45:07.844056   11108 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0219 04:45:07.865633   11108 ops.go:34] apiserver oom_adj: -16
	I0219 04:45:07.865633   11108 kubeadm.go:637] restartCluster took 49.7468737s
	I0219 04:45:07.865633   11108 kubeadm.go:403] StartCluster complete in 49.8293001s
	I0219 04:45:07.865633   11108 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:45:07.865633   11108 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:45:07.867640   11108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:45:07.869661   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0219 04:45:07.869661   11108 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0219 04:45:07.873626   11108 out.go:177] * Enabled addons: 
	I0219 04:45:07.869661   11108 config.go:182] Loaded profile config "pause-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:45:07.875636   11108 addons.go:492] enable addons completed in 5.9748ms: enabled=[]
	I0219 04:45:07.878671   11108 kapi.go:59] client config for pause-061400: &rest.Config{Host:"https://172.28.246.210:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-061400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-061400\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8
(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:45:07.885640   11108 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-061400" context rescaled to 1 replicas
	I0219 04:45:07.885640   11108 start.go:223] Will wait 6m0s for node &{Name: IP:172.28.246.210 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:45:07.887626   11108 out.go:177] * Verifying Kubernetes components...
	I0219 04:45:06.664528   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:45:06.664528   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:06.664965   10072 sshutil.go:53] new ssh client: &{IP:172.28.248.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-expiration-011800\id_rsa Username:docker}
	I0219 04:45:06.685119   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:45:06.685119   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:06.685119   10072 sshutil.go:53] new ssh client: &{IP:172.28.248.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-expiration-011800\id_rsa Username:docker}
	I0219 04:45:06.883984   10072 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.1549697s)
	I0219 04:45:06.884108   10072 ssh_runner.go:235] Completed: cat /version.json: (2.1474191s)
	I0219 04:45:06.894904   10072 ssh_runner.go:195] Run: systemctl --version
	I0219 04:45:06.912828   10072 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0219 04:45:06.921225   10072 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0219 04:45:06.931446   10072 ssh_runner.go:195] Run: which cri-dockerd
	I0219 04:45:06.948787   10072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0219 04:45:06.965033   10072 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0219 04:45:07.004765   10072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0219 04:45:07.031165   10072 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0219 04:45:07.031165   10072 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:45:07.040090   10072 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:45:07.074978   10072 docker.go:630] Got preloaded images: 
	I0219 04:45:07.074978   10072 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
	I0219 04:45:07.084555   10072 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 04:45:07.113541   10072 ssh_runner.go:195] Run: which lz4
	I0219 04:45:07.128546   10072 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0219 04:45:07.133556   10072 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0219 04:45:07.134554   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416334111 bytes)
	I0219 04:45:09.535999   10072 docker.go:594] Took 2.416131 seconds to copy over tarball
	I0219 04:45:09.546517   10072 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0219 04:45:07.901637   11108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:45:08.040097   11108 node_ready.go:35] waiting up to 6m0s for node "pause-061400" to be "Ready" ...
	I0219 04:45:08.041111   11108 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0219 04:45:08.046088   11108 node_ready.go:49] node "pause-061400" has status "Ready":"True"
	I0219 04:45:08.046088   11108 node_ready.go:38] duration metric: took 4.9772ms waiting for node "pause-061400" to be "Ready" ...
	I0219 04:45:08.047114   11108 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:45:08.204562   11108 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-mjptj" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:08.600526   11108 pod_ready.go:92] pod "coredns-787d4945fb-mjptj" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:08.601519   11108 pod_ready.go:81] duration metric: took 396.9585ms waiting for pod "coredns-787d4945fb-mjptj" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:08.601519   11108 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.001285   11108 pod_ready.go:92] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:09.001285   11108 pod_ready.go:81] duration metric: took 399.7671ms waiting for pod "etcd-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.001285   11108 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.411539   11108 pod_ready.go:92] pod "kube-apiserver-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:09.411539   11108 pod_ready.go:81] duration metric: took 410.2559ms waiting for pod "kube-apiserver-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.411539   11108 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.802176   11108 pod_ready.go:92] pod "kube-controller-manager-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:09.802176   11108 pod_ready.go:81] duration metric: took 390.6384ms waiting for pod "kube-controller-manager-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.802176   11108 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mgb72" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:10.203669   11108 pod_ready.go:92] pod "kube-proxy-mgb72" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:10.203669   11108 pod_ready.go:81] duration metric: took 401.4946ms waiting for pod "kube-proxy-mgb72" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:10.203736   11108 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:10.599228   11108 pod_ready.go:92] pod "kube-scheduler-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:10.599228   11108 pod_ready.go:81] duration metric: took 395.494ms waiting for pod "kube-scheduler-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:10.599228   11108 pod_ready.go:38] duration metric: took 2.5521234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:45:10.599228   11108 api_server.go:51] waiting for apiserver process to appear ...
	I0219 04:45:10.610148   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:45:10.637038   11108 api_server.go:71] duration metric: took 2.7514082s to wait for apiserver process to appear ...
	I0219 04:45:10.637038   11108 api_server.go:87] waiting for apiserver healthz status ...
	I0219 04:45:10.637038   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:45:10.647517   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 200:
	ok
	I0219 04:45:10.649514   11108 api_server.go:140] control plane version: v1.26.1
	I0219 04:45:10.649514   11108 api_server.go:130] duration metric: took 12.4755ms to wait for apiserver health ...
	I0219 04:45:10.649514   11108 system_pods.go:43] waiting for kube-system pods to appear ...
	I0219 04:45:10.811236   11108 system_pods.go:59] 6 kube-system pods found
	I0219 04:45:10.811236   11108 system_pods.go:61] "coredns-787d4945fb-mjptj" [305ace80-8a26-4015-a001-8b39b2b2a3ec] Running
	I0219 04:45:10.811236   11108 system_pods.go:61] "etcd-pause-061400" [5a7f37ea-cadb-4c05-8b9a-5348add9549c] Running
	I0219 04:45:10.811236   11108 system_pods.go:61] "kube-apiserver-pause-061400" [ad50ee06-f8fc-4765-b66b-6cfd393e1fc8] Running
	I0219 04:45:10.811236   11108 system_pods.go:61] "kube-controller-manager-pause-061400" [6dbc60d0-4db1-4da3-bb30-987d27afe1fd] Running
	I0219 04:45:10.811236   11108 system_pods.go:61] "kube-proxy-mgb72" [df76445b-2fa1-405c-9cd6-46a18b28ef95] Running
	I0219 04:45:10.811236   11108 system_pods.go:61] "kube-scheduler-pause-061400" [74e11769-c7e7-47db-b848-85368297db6e] Running
	I0219 04:45:10.811236   11108 system_pods.go:74] duration metric: took 161.7234ms to wait for pod list to return data ...
	I0219 04:45:10.811236   11108 default_sa.go:34] waiting for default service account to be created ...
	I0219 04:45:10.997573   11108 default_sa.go:45] found service account: "default"
	I0219 04:45:10.997671   11108 default_sa.go:55] duration metric: took 186.4355ms for default service account to be created ...
	I0219 04:45:10.997671   11108 system_pods.go:116] waiting for k8s-apps to be running ...
	I0219 04:45:11.249598   11108 system_pods.go:86] 6 kube-system pods found
	I0219 04:45:11.249660   11108 system_pods.go:89] "coredns-787d4945fb-mjptj" [305ace80-8a26-4015-a001-8b39b2b2a3ec] Running
	I0219 04:45:11.249660   11108 system_pods.go:89] "etcd-pause-061400" [5a7f37ea-cadb-4c05-8b9a-5348add9549c] Running
	I0219 04:45:11.249660   11108 system_pods.go:89] "kube-apiserver-pause-061400" [ad50ee06-f8fc-4765-b66b-6cfd393e1fc8] Running
	I0219 04:45:11.249660   11108 system_pods.go:89] "kube-controller-manager-pause-061400" [6dbc60d0-4db1-4da3-bb30-987d27afe1fd] Running
	I0219 04:45:11.249660   11108 system_pods.go:89] "kube-proxy-mgb72" [df76445b-2fa1-405c-9cd6-46a18b28ef95] Running
	I0219 04:45:11.249660   11108 system_pods.go:89] "kube-scheduler-pause-061400" [74e11769-c7e7-47db-b848-85368297db6e] Running
	I0219 04:45:11.249723   11108 system_pods.go:126] duration metric: took 252.0526ms to wait for k8s-apps to be running ...
	I0219 04:45:11.249756   11108 system_svc.go:44] waiting for kubelet service to be running ....
	I0219 04:45:11.260020   11108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:45:11.282404   11108 system_svc.go:56] duration metric: took 32.6482ms WaitForService to wait for kubelet.
	I0219 04:45:11.282470   11108 kubeadm.go:578] duration metric: took 3.3968425s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0219 04:45:11.282470   11108 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:45:11.494654   11108 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:45:11.494654   11108 node_conditions.go:123] node cpu capacity is 2
	I0219 04:45:11.494654   11108 node_conditions.go:105] duration metric: took 212.1846ms to run NodePressure ...
	I0219 04:45:11.494654   11108 start.go:228] waiting for startup goroutines ...
	I0219 04:45:11.494654   11108 start.go:233] waiting for cluster config update ...
	I0219 04:45:11.494654   11108 start.go:242] writing updated cluster config ...
	I0219 04:45:11.506084   11108 ssh_runner.go:195] Run: rm -f paused
	I0219 04:45:11.716356   11108 start.go:555] kubectl: 1.18.2, cluster: 1.26.1 (minor skew: 8)
	I0219 04:45:11.786610   11108 out.go:177] 
	W0219 04:45:11.938234   11108 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.1.
	I0219 04:45:12.039082   11108 out.go:177]   - Want kubectl v1.26.1? Try 'minikube kubectl -- get pods -A'
	I0219 04:45:07.601296    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:07.601296    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:07.607288    4660 main.go:141] libmachine: Using SSH client type: native
	I0219 04:45:07.608304    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.251.111 22 <nil> <nil>}
	I0219 04:45:07.608304    4660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-803700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-803700/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-803700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:45:07.815009    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0219 04:45:07.815009    4660 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:45:07.815009    4660 buildroot.go:174] setting up certificates
	I0219 04:45:07.815009    4660 provision.go:83] configureAuth start
	I0219 04:45:07.815009    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:08.627521    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:08.627521    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:08.627521    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:09.813004    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:09.813061    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:09.813061    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:10.562150    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:10.562150    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:10.562150    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:11.653169    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:11.653326    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:11.653326    4660 provision.go:138] copyHostCerts
	I0219 04:45:11.653735    4660 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:45:11.653735    4660 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:45:11.654152    4660 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:45:11.655463    4660 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:45:11.655532    4660 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:45:11.655923    4660 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:45:11.657162    4660 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:45:11.657235    4660 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:45:11.657537    4660 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:45:11.658627    4660 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-803700 san=[172.28.251.111 172.28.251.111 localhost 127.0.0.1 minikube kubernetes-upgrade-803700]
	I0219 04:45:11.877665    4660 provision.go:172] copyRemoteCerts
	I0219 04:45:11.888508    4660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:45:11.888508    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:12.233949   11108 out.go:177] * Done! kubectl is now configured to use "pause-061400" cluster and "default" namespace by default
	I0219 04:45:12.641892    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:12.641928    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:12.641987    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:13.803165    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:13.803165    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:13.803773    4660 sshutil.go:53] new ssh client: &{IP:172.28.251.111 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\id_rsa Username:docker}
	I0219 04:45:13.917309    4660 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.0288086s)
	I0219 04:45:13.917870    4660 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0219 04:45:13.971591    4660 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0219 04:45:14.020534    4660 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:45:14.065439    4660 provision.go:86] duration metric: configureAuth took 6.2504525s
	I0219 04:45:14.065439    4660 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:45:14.066113    4660 config.go:182] Loaded profile config "kubernetes-upgrade-803700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:45:14.066113    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:14.911825    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:14.911825    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:14.911825    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:16.022891    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:16.022991    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:16.027596    4660 main.go:141] libmachine: Using SSH client type: native
	I0219 04:45:16.028888    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.251.111 22 <nil> <nil>}
	I0219 04:45:16.028961    4660 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:45:16.188533    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:45:16.188533    4660 buildroot.go:70] root file system type: tmpfs
	I0219 04:45:16.189124    4660 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:45:16.189185    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:16.954593    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:16.954593    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:16.954729    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sun 2023-02-19 04:40:04 UTC, ends at Sun 2023-02-19 04:45:23 UTC. --
	Feb 19 04:44:47 pause-061400 dockerd[5477]: time="2023-02-19T04:44:47.151467615Z" level=warning msg="cleanup warnings time=\"2023-02-19T04:44:47Z\" level=info msg=\"starting signal loop\" namespace=moby pid=7994 runtime=io.containerd.runc.v2\n"
	Feb 19 04:44:49 pause-061400 dockerd[5477]: time="2023-02-19T04:44:49.967716322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:44:49 pause-061400 dockerd[5477]: time="2023-02-19T04:44:49.967856021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:44:49 pause-061400 dockerd[5477]: time="2023-02-19T04:44:49.967875321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:44:49 pause-061400 dockerd[5477]: time="2023-02-19T04:44:49.970165814Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f6fd40251a2cd10d00add36019ae5440f8b22340454e36cac5db6c0a8d14de5a pid=8251 runtime=io.containerd.runc.v2
	Feb 19 04:44:50 pause-061400 dockerd[5477]: time="2023-02-19T04:44:50.031507720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:44:50 pause-061400 dockerd[5477]: time="2023-02-19T04:44:50.031607820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:44:50 pause-061400 dockerd[5477]: time="2023-02-19T04:44:50.031638120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:44:50 pause-061400 dockerd[5477]: time="2023-02-19T04:44:50.031948619Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f1ea4dbac71cbdbcf3c31ce965eb41575e22631dedaea2d6968d45f9ace4730d pid=8295 runtime=io.containerd.runc.v2
	Feb 19 04:44:50 pause-061400 dockerd[5477]: time="2023-02-19T04:44:50.031292121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:44:50 pause-061400 dockerd[5477]: time="2023-02-19T04:44:50.033345815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:44:50 pause-061400 dockerd[5477]: time="2023-02-19T04:44:50.033428614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:44:50 pause-061400 dockerd[5477]: time="2023-02-19T04:44:50.034351811Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/175333ff2e0487da9992eec10020ddf1ed0f5ac49cd410fcad052c6d1b6aaae0 pid=8291 runtime=io.containerd.runc.v2
	Feb 19 04:44:56 pause-061400 dockerd[5477]: time="2023-02-19T04:44:56.322916347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:44:56 pause-061400 dockerd[5477]: time="2023-02-19T04:44:56.322973947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:44:56 pause-061400 dockerd[5477]: time="2023-02-19T04:44:56.322986547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:44:56 pause-061400 dockerd[5477]: time="2023-02-19T04:44:56.323187246Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/eea008990fc5459c7dc4700c8c5b0fe979de298d2eaa68b2d5f5757cb63959d6 pid=8473 runtime=io.containerd.runc.v2
	Feb 19 04:44:56 pause-061400 dockerd[5477]: time="2023-02-19T04:44:56.808387266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:44:56 pause-061400 dockerd[5477]: time="2023-02-19T04:44:56.808457566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:44:56 pause-061400 dockerd[5477]: time="2023-02-19T04:44:56.808471165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:44:56 pause-061400 dockerd[5477]: time="2023-02-19T04:44:56.811328557Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1ac745449796840c61516af32333a6452ed4f81ef04f4bd42071ef043d237531 pid=8565 runtime=io.containerd.runc.v2
	Feb 19 04:44:57 pause-061400 dockerd[5477]: time="2023-02-19T04:44:57.736117159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:44:57 pause-061400 dockerd[5477]: time="2023-02-19T04:44:57.736178659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:44:57 pause-061400 dockerd[5477]: time="2023-02-19T04:44:57.736207859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:44:57 pause-061400 dockerd[5477]: time="2023-02-19T04:44:57.736729258Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a13216268eec03b0be16bee037cde662a01c41c63fab4cbba7bd36383004165f pid=8674 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	a13216268eec0       5185b96f0becf       26 seconds ago       Running             coredns                   2                   1ac7454497968
	eea008990fc54       46a6bb3c77ce0       27 seconds ago       Running             kube-proxy                2                   45ac4e42f0ee5
	175333ff2e048       655493523f607       34 seconds ago       Running             kube-scheduler            3                   42ece85db173c
	f1ea4dbac71cb       e9c08e11b07f6       34 seconds ago       Running             kube-controller-manager   3                   f41416dad1e91
	f6fd40251a2cd       fce326961ae2d       34 seconds ago       Running             etcd                      3                   abd19d387f579
	2b3c01bd2cb0b       deb04688c4a35       38 seconds ago       Running             kube-apiserver            3                   3e7d944f36e27
	7e146eb9b7480       5185b96f0becf       54 seconds ago       Exited              coredns                   1                   a0afef71105f2
	fa50d192f6494       655493523f607       54 seconds ago       Exited              kube-scheduler            2                   6da63f0b880ee
	c68aa6c91f3f1       deb04688c4a35       About a minute ago   Exited              kube-apiserver            2                   238e4cc4e6973
	c80aca6e1a30a       46a6bb3c77ce0       About a minute ago   Exited              kube-proxy                1                   20ba1b6e44821
	80b34d8effdc5       fce326961ae2d       About a minute ago   Exited              etcd                      2                   864e078083a69
	9a8d52471ec08       e9c08e11b07f6       About a minute ago   Exited              kube-controller-manager   2                   15064c3c6813d
	
	* 
	* ==> coredns [7e146eb9b748] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = dc373b1a880fdd4ccb700cff30600cc4bf8c50378309c853254a8500867351a3e9142cc9578843a443961b28e6690d646b490f89e043822a41fbe79aabc9a951
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:46953 - 36995 "HINFO IN 1726034603133606588.8159892721735991185. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032888463s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [a13216268eec] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = dc373b1a880fdd4ccb700cff30600cc4bf8c50378309c853254a8500867351a3e9142cc9578843a443961b28e6690d646b490f89e043822a41fbe79aabc9a951
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:38635 - 39239 "HINFO IN 548711931721667633.127040790714027977. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.026858626s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-061400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-061400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b522747fea7d12101d906a75c46b71d9d9f96e61
	                    minikube.k8s.io/name=pause-061400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_19T04_41_33_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Feb 2023 04:41:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-061400
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Feb 2023 04:45:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Feb 2023 04:44:54 +0000   Sun, 19 Feb 2023 04:41:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Feb 2023 04:44:54 +0000   Sun, 19 Feb 2023 04:41:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Feb 2023 04:44:54 +0000   Sun, 19 Feb 2023 04:41:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Feb 2023 04:44:54 +0000   Sun, 19 Feb 2023 04:41:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.246.210
	  Hostname:    pause-061400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	System Info:
	  Machine ID:                 d23f6a8ecf094da2bb8bca5e6922005a
	  System UUID:                a3845a9c-434a-d844-a7a5-67e7ad1bb4c1
	  Boot ID:                    a4f2da02-6178-4141-9bba-a2a84c6dfa59
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-787d4945fb-mjptj                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m37s
	  kube-system                 etcd-pause-061400                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         3m51s
	  kube-system                 kube-apiserver-pause-061400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-pause-061400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-mgb72                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kube-scheduler-pause-061400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m34s                kube-proxy       
	  Normal  Starting                 27s                  kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  NodeHasSufficientPID     4m5s (x6 over 4m5s)  kubelet          Node pause-061400 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    4m5s (x6 over 4m5s)  kubelet          Node pause-061400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m5s (x7 over 4m5s)  kubelet          Node pause-061400 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m50s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s                kubelet          Node pause-061400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s                kubelet          Node pause-061400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s                kubelet          Node pause-061400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m47s                kubelet          Node pause-061400 status is now: NodeReady
	  Normal  RegisteredNode           3m37s                node-controller  Node pause-061400 event: Registered Node pause-061400 in Controller
	  Normal  Starting                 35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)    kubelet          Node pause-061400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)    kubelet          Node pause-061400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x7 over 34s)    kubelet          Node pause-061400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                  node-controller  Node pause-061400 event: Registered Node pause-061400 in Controller
	
	* 
	* ==> dmesg <==
	* [  +2.355292] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.723036] systemd-fstab-generator[1084]: Ignoring "noauto" for root device
	[  +0.530854] systemd-fstab-generator[1122]: Ignoring "noauto" for root device
	[  +0.192737] systemd-fstab-generator[1133]: Ignoring "noauto" for root device
	[  +0.199434] systemd-fstab-generator[1146]: Ignoring "noauto" for root device
	[  +1.709069] systemd-fstab-generator[1293]: Ignoring "noauto" for root device
	[  +0.185143] systemd-fstab-generator[1304]: Ignoring "noauto" for root device
	[  +0.184794] systemd-fstab-generator[1315]: Ignoring "noauto" for root device
	[  +0.178472] systemd-fstab-generator[1326]: Ignoring "noauto" for root device
	[  +6.410789] systemd-fstab-generator[1572]: Ignoring "noauto" for root device
	[  +0.907289] kauditd_printk_skb: 68 callbacks suppressed
	[ +15.082353] systemd-fstab-generator[2460]: Ignoring "noauto" for root device
	[ +15.612353] kauditd_printk_skb: 8 callbacks suppressed
	[Feb19 04:43] systemd-fstab-generator[4641]: Ignoring "noauto" for root device
	[  +0.491028] systemd-fstab-generator[4673]: Ignoring "noauto" for root device
	[  +0.221651] systemd-fstab-generator[4684]: Ignoring "noauto" for root device
	[  +0.279987] systemd-fstab-generator[4704]: Ignoring "noauto" for root device
	[  +5.274354] kauditd_printk_skb: 21 callbacks suppressed
	[Feb19 04:44] systemd-fstab-generator[5952]: Ignoring "noauto" for root device
	[  +0.271137] systemd-fstab-generator[5989]: Ignoring "noauto" for root device
	[  +0.263398] systemd-fstab-generator[6044]: Ignoring "noauto" for root device
	[  +0.261800] systemd-fstab-generator[6070]: Ignoring "noauto" for root device
	[  +6.269630] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.734267] kauditd_printk_skb: 6 callbacks suppressed
	[ +19.349897] systemd-fstab-generator[8075]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [80b34d8effdc] <==
	* {"level":"info","ts":"2023-02-19T04:44:40.845Z","caller":"traceutil/trace.go:171","msg":"trace[166448543] linearizableReadLoop","detail":"{readStateIndex:501; appliedIndex:500; }","duration":"362.027172ms","start":"2023-02-19T04:44:40.483Z","end":"2023-02-19T04:44:40.845Z","steps":["trace[166448543] 'read index received'  (duration: 203.283154ms)","trace[166448543] 'applied index is now lower than readState.Index'  (duration: 158.742618ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:44:41.181Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"231.618861ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7521159158890045686 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-787d4945fb-mjptj.174520330e30183d\" mod_revision:449 > success:<request_put:<key:\"/registry/events/kube-system/coredns-787d4945fb-mjptj.174520330e30183d\" value_size:584 lease:7521159158890045563 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-787d4945fb-mjptj.174520330e30183d\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-02-19T04:44:41.181Z","caller":"traceutil/trace.go:171","msg":"trace[1886732854] linearizableReadLoop","detail":"{readStateIndex:506; appliedIndex:505; }","duration":"243.408818ms","start":"2023-02-19T04:44:40.938Z","end":"2023-02-19T04:44:41.181Z","steps":["trace[1886732854] 'read index received'  (duration: 11.528458ms)","trace[1886732854] 'applied index is now lower than readState.Index'  (duration: 231.87936ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:44:41.181Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"271.107316ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:node-controller\" ","response":"range_response_count:1 size:835"}
	{"level":"info","ts":"2023-02-19T04:44:41.181Z","caller":"traceutil/trace.go:171","msg":"trace[1281564352] range","detail":"{range_begin:/registry/clusterroles/system:controller:node-controller; range_end:; response_count:1; response_revision:457; }","duration":"271.129816ms","start":"2023-02-19T04:44:40.910Z","end":"2023-02-19T04:44:41.181Z","steps":["trace[1281564352] 'agreement among raft nodes before linearized reading'  (duration: 270.965317ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-19T04:44:41.182Z","caller":"traceutil/trace.go:171","msg":"trace[1107592853] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"272.572311ms","start":"2023-02-19T04:44:40.909Z","end":"2023-02-19T04:44:41.182Z","steps":["trace[1107592853] 'process raft request'  (duration: 40.188953ms)","trace[1107592853] 'compare'  (duration: 229.01347ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:44:41.798Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"473.156692ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7521159158890045690 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-787d4945fb-mjptj.174520335c268976\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-787d4945fb-mjptj.174520335c268976\" value_size:890 lease:7521159158890045563 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-02-19T04:44:41.798Z","caller":"traceutil/trace.go:171","msg":"trace[681197953] linearizableReadLoop","detail":"{readStateIndex:507; appliedIndex:506; }","duration":"603.88702ms","start":"2023-02-19T04:44:41.194Z","end":"2023-02-19T04:44:41.798Z","steps":["trace[681197953] 'read index received'  (duration: 130.462829ms)","trace[681197953] 'applied index is now lower than readState.Index'  (duration: 473.422491ms)"],"step_count":2}
	{"level":"info","ts":"2023-02-19T04:44:41.798Z","caller":"traceutil/trace.go:171","msg":"trace[2047180931] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"604.285418ms","start":"2023-02-19T04:44:41.194Z","end":"2023-02-19T04:44:41.798Z","steps":["trace[2047180931] 'process raft request'  (duration: 130.842028ms)","trace[2047180931] 'compare'  (duration: 472.822793ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:44:41.798Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:44:41.194Z","time spent":"604.353318ms","remote":"127.0.0.1:43210","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":978,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-787d4945fb-mjptj.174520335c268976\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-787d4945fb-mjptj.174520335c268976\" value_size:890 lease:7521159158890045563 >> failure:<>"}
	{"level":"warn","ts":"2023-02-19T04:44:41.798Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"494.181316ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-02-19T04:44:41.798Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"604.507317ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:pod-garbage-collector\" ","response":"range_response_count:1 size:663"}
	{"level":"info","ts":"2023-02-19T04:44:41.798Z","caller":"traceutil/trace.go:171","msg":"trace[1097732361] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:458; }","duration":"494.254915ms","start":"2023-02-19T04:44:41.304Z","end":"2023-02-19T04:44:41.798Z","steps":["trace[1097732361] 'agreement among raft nodes before linearized reading'  (duration: 494.127516ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-19T04:44:41.798Z","caller":"traceutil/trace.go:171","msg":"trace[650200663] range","detail":"{range_begin:/registry/clusterroles/system:controller:pod-garbage-collector; range_end:; response_count:1; response_revision:458; }","duration":"604.538317ms","start":"2023-02-19T04:44:41.194Z","end":"2023-02-19T04:44:41.798Z","steps":["trace[650200663] 'agreement among raft nodes before linearized reading'  (duration: 604.303518ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-19T04:44:41.798Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:44:41.304Z","time spent":"494.314315ms","remote":"127.0.0.1:43186","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-02-19T04:44:41.798Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:44:41.194Z","time spent":"604.579617ms","remote":"127.0.0.1:43272","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":686,"request content":"key:\"/registry/clusterroles/system:controller:pod-garbage-collector\" "}
	{"level":"info","ts":"2023-02-19T04:44:41.965Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-02-19T04:44:41.965Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-061400","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.28.246.210:2380"],"advertise-client-urls":["https://172.28.246.210:2379"]}
	{"level":"warn","ts":"2023-02-19T04:44:42.087Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"183.522742ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:replication-controller\" ","response":"","error":"rangeKeys: context cancelled: context canceled"}
	{"level":"info","ts":"2023-02-19T04:44:42.087Z","caller":"traceutil/trace.go:171","msg":"trace[1152573996] range","detail":"{range_begin:/registry/clusterroles/system:controller:replication-controller; range_end:; }","duration":"183.611742ms","start":"2023-02-19T04:44:41.903Z","end":"2023-02-19T04:44:42.087Z","steps":["trace[1152573996] 'range keys from in-memory index tree'  (duration: 183.424543ms)"],"step_count":1}
	WARNING: 2023/02/19 04:44:42 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2023-02-19T04:44:42.103Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f3a8ca31a8b26860","current-leader-member-id":"f3a8ca31a8b26860"}
	{"level":"info","ts":"2023-02-19T04:44:42.281Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"172.28.246.210:2380"}
	{"level":"info","ts":"2023-02-19T04:44:42.283Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"172.28.246.210:2380"}
	{"level":"info","ts":"2023-02-19T04:44:42.283Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-061400","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.28.246.210:2380"],"advertise-client-urls":["https://172.28.246.210:2379"]}
	
	* 
	* ==> etcd [f6fd40251a2c] <==
	* {"level":"info","ts":"2023-02-19T04:44:51.204Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-19T04:44:51.206Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"172.28.246.210:2380"}
	{"level":"info","ts":"2023-02-19T04:44:51.206Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"172.28.246.210:2380"}
	{"level":"info","ts":"2023-02-19T04:44:51.206Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"f3a8ca31a8b26860","initial-advertise-peer-urls":["https://172.28.246.210:2380"],"listen-peer-urls":["https://172.28.246.210:2380"],"advertise-client-urls":["https://172.28.246.210:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.246.210:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-19T04:44:51.206Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-19T04:44:52.112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f3a8ca31a8b26860 is starting a new election at term 4"}
	{"level":"info","ts":"2023-02-19T04:44:52.112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f3a8ca31a8b26860 became pre-candidate at term 4"}
	{"level":"info","ts":"2023-02-19T04:44:52.112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f3a8ca31a8b26860 received MsgPreVoteResp from f3a8ca31a8b26860 at term 4"}
	{"level":"info","ts":"2023-02-19T04:44:52.112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f3a8ca31a8b26860 became candidate at term 5"}
	{"level":"info","ts":"2023-02-19T04:44:52.112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f3a8ca31a8b26860 received MsgVoteResp from f3a8ca31a8b26860 at term 5"}
	{"level":"info","ts":"2023-02-19T04:44:52.112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f3a8ca31a8b26860 became leader at term 5"}
	{"level":"info","ts":"2023-02-19T04:44:52.112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f3a8ca31a8b26860 elected leader f3a8ca31a8b26860 at term 5"}
	{"level":"info","ts":"2023-02-19T04:44:52.121Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"f3a8ca31a8b26860","local-member-attributes":"{Name:pause-061400 ClientURLs:[https://172.28.246.210:2379]}","request-path":"/0/members/f3a8ca31a8b26860/attributes","cluster-id":"5f814601d2eff1a5","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-19T04:44:52.121Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-19T04:44:52.123Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-19T04:44:52.123Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-19T04:44:52.124Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-19T04:44:52.124Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-19T04:44:52.141Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"172.28.246.210:2379"}
	{"level":"info","ts":"2023-02-19T04:45:15.378Z","caller":"traceutil/trace.go:171","msg":"trace[544152551] transaction","detail":"{read_only:false; response_revision:535; number_of_response:1; }","duration":"166.54475ms","start":"2023-02-19T04:45:15.212Z","end":"2023-02-19T04:45:15.378Z","steps":["trace[544152551] 'process raft request'  (duration: 166.231051ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-19T04:45:15.738Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"147.653989ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7521159158896142680 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-061400\" mod_revision:520 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-061400\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-061400\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-02-19T04:45:15.738Z","caller":"traceutil/trace.go:171","msg":"trace[26628075] transaction","detail":"{read_only:false; response_revision:536; number_of_response:1; }","duration":"320.918225ms","start":"2023-02-19T04:45:15.417Z","end":"2023-02-19T04:45:15.738Z","steps":["trace[26628075] 'process raft request'  (duration: 172.192638ms)","trace[26628075] 'compare'  (duration: 147.31369ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:45:15.738Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:45:15.417Z","time spent":"321.200925ms","remote":"127.0.0.1:43986","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-061400\" mod_revision:520 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-061400\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-061400\" > >"}
	{"level":"warn","ts":"2023-02-19T04:45:16.104Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"118.394455ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:341"}
	{"level":"info","ts":"2023-02-19T04:45:16.104Z","caller":"traceutil/trace.go:171","msg":"trace[832395901] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:536; }","duration":"118.947254ms","start":"2023-02-19T04:45:15.985Z","end":"2023-02-19T04:45:16.104Z","steps":["trace[832395901] 'range keys from in-memory index tree'  (duration: 118.030255ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  04:45:23 up 5 min,  0 users,  load average: 1.70, 1.01, 0.45
	Linux pause-061400 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2b3c01bd2cb0] <==
	* I0219 04:44:54.534398       1 establishing_controller.go:76] Starting EstablishingController
	I0219 04:44:54.534476       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0219 04:44:54.534506       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0219 04:44:54.534520       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0219 04:44:54.523577       1 autoregister_controller.go:141] Starting autoregister controller
	I0219 04:44:54.587496       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0219 04:44:54.692317       1 cache.go:39] Caches are synced for autoregister controller
	I0219 04:44:54.726753       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0219 04:44:54.727595       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0219 04:44:54.728503       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0219 04:44:54.728737       1 shared_informer.go:280] Caches are synced for configmaps
	I0219 04:44:54.729008       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0219 04:44:54.729153       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0219 04:44:54.729487       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0219 04:44:54.743658       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0219 04:44:54.767417       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0219 04:44:55.140185       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0219 04:44:55.535669       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0219 04:44:56.347057       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0219 04:44:56.388131       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0219 04:44:56.456835       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0219 04:44:56.541826       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0219 04:44:56.581082       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0219 04:45:07.245454       1 controller.go:615] quota admission added evaluator for: endpoints
	I0219 04:45:07.258392       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [c68aa6c91f3f] <==
	* W0219 04:44:42.996372       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0219 04:44:42.996407       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0219 04:44:42.996452       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	E0219 04:44:43.200574       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	
	* 
	* ==> kube-controller-manager [9a8d52471ec0] <==
	* I0219 04:44:26.309688       1 serving.go:348] Generated self-signed cert in-memory
	I0219 04:44:27.391352       1 controllermanager.go:182] Version: v1.26.1
	I0219 04:44:27.391408       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0219 04:44:27.393674       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0219 04:44:27.394852       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0219 04:44:27.394974       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0219 04:44:27.395071       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-controller-manager [f1ea4dbac71c] <==
	* I0219 04:45:07.181192       1 shared_informer.go:280] Caches are synced for certificate-csrapproving
	I0219 04:45:07.181486       1 shared_informer.go:280] Caches are synced for deployment
	I0219 04:45:07.182702       1 shared_informer.go:280] Caches are synced for taint
	I0219 04:45:07.182996       1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone: 
	W0219 04:45:07.183215       1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-061400. Assuming now as a timestamp.
	I0219 04:45:07.183424       1 node_lifecycle_controller.go:1254] Controller detected that zone  is now in state Normal.
	I0219 04:45:07.184104       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0219 04:45:07.184394       1 taint_manager.go:211] "Sending events to api server"
	I0219 04:45:07.184795       1 event.go:294] "Event occurred" object="pause-061400" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-061400 event: Registered Node pause-061400 in Controller"
	I0219 04:45:07.185129       1 shared_informer.go:280] Caches are synced for ephemeral
	I0219 04:45:07.197214       1 shared_informer.go:280] Caches are synced for crt configmap
	I0219 04:45:07.200538       1 shared_informer.go:280] Caches are synced for endpoint
	I0219 04:45:07.205416       1 shared_informer.go:280] Caches are synced for stateful set
	I0219 04:45:07.220819       1 shared_informer.go:280] Caches are synced for disruption
	I0219 04:45:07.222408       1 shared_informer.go:280] Caches are synced for PV protection
	I0219 04:45:07.226391       1 shared_informer.go:280] Caches are synced for namespace
	I0219 04:45:07.229297       1 shared_informer.go:280] Caches are synced for job
	I0219 04:45:07.230776       1 shared_informer.go:280] Caches are synced for endpoint_slice
	I0219 04:45:07.270787       1 shared_informer.go:280] Caches are synced for cronjob
	I0219 04:45:07.307470       1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
	I0219 04:45:07.310218       1 shared_informer.go:280] Caches are synced for resource quota
	I0219 04:45:07.335851       1 shared_informer.go:280] Caches are synced for resource quota
	I0219 04:45:07.771179       1 shared_informer.go:280] Caches are synced for garbage collector
	I0219 04:45:07.801900       1 shared_informer.go:280] Caches are synced for garbage collector
	I0219 04:45:07.805194       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [c80aca6e1a30] <==
	* I0219 04:44:32.700126       1 node.go:163] Successfully retrieved node IP: 172.28.246.210
	I0219 04:44:32.717869       1 server_others.go:109] "Detected node IP" address="172.28.246.210"
	I0219 04:44:32.718149       1 server_others.go:535] "Using iptables proxy"
	I0219 04:44:32.824814       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0219 04:44:32.824848       1 server_others.go:176] "Using iptables Proxier"
	I0219 04:44:32.824893       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0219 04:44:32.826738       1 server.go:655] "Version info" version="v1.26.1"
	I0219 04:44:32.826763       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0219 04:44:32.827894       1 config.go:317] "Starting service config controller"
	I0219 04:44:32.828114       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0219 04:44:32.828160       1 config.go:226] "Starting endpoint slice config controller"
	I0219 04:44:32.828171       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0219 04:44:32.832968       1 config.go:444] "Starting node config controller"
	I0219 04:44:32.833426       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0219 04:44:32.929309       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0219 04:44:32.929397       1 shared_informer.go:280] Caches are synced for service config
	I0219 04:44:32.934166       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-proxy [eea008990fc5] <==
	* I0219 04:44:56.557703       1 node.go:163] Successfully retrieved node IP: 172.28.246.210
	I0219 04:44:56.560173       1 server_others.go:109] "Detected node IP" address="172.28.246.210"
	I0219 04:44:56.560207       1 server_others.go:535] "Using iptables proxy"
	I0219 04:44:56.630773       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0219 04:44:56.630918       1 server_others.go:176] "Using iptables Proxier"
	I0219 04:44:56.630962       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0219 04:44:56.632036       1 server.go:655] "Version info" version="v1.26.1"
	I0219 04:44:56.632179       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0219 04:44:56.634073       1 config.go:317] "Starting service config controller"
	I0219 04:44:56.634754       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0219 04:44:56.635115       1 config.go:226] "Starting endpoint slice config controller"
	I0219 04:44:56.641286       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0219 04:44:56.641325       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0219 04:44:56.635412       1 config.go:444] "Starting node config controller"
	I0219 04:44:56.641349       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0219 04:44:56.641357       1 shared_informer.go:280] Caches are synced for node config
	I0219 04:44:56.736375       1 shared_informer.go:280] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [175333ff2e04] <==
	* I0219 04:44:51.505158       1 serving.go:348] Generated self-signed cert in-memory
	W0219 04:44:54.609312       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0219 04:44:54.609598       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0219 04:44:54.609825       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0219 04:44:54.610081       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0219 04:44:54.689915       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0219 04:44:54.690186       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0219 04:44:54.697476       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0219 04:44:54.697955       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0219 04:44:54.702339       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0219 04:44:54.704747       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0219 04:44:54.799723       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [fa50d192f649] <==
	* I0219 04:44:33.418639       1 serving.go:348] Generated self-signed cert in-memory
	I0219 04:44:34.119951       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0219 04:44:34.120055       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0219 04:44:34.556044       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0219 04:44:34.556141       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0219 04:44:34.556154       1 shared_informer.go:273] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0219 04:44:34.556175       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0219 04:44:34.570896       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0219 04:44:34.570940       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0219 04:44:34.570967       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0219 04:44:34.575146       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0219 04:44:34.660454       1 shared_informer.go:280] Caches are synced for RequestHeaderAuthRequestController
	I0219 04:44:34.671981       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0219 04:44:34.676168       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	E0219 04:44:42.112564       1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0219 04:44:42.112792       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sun 2023-02-19 04:40:04 UTC, ends at Sun 2023-02-19 04:45:24 UTC. --
	Feb 19 04:44:49 pause-061400 kubelet[8081]: I0219 04:44:49.523153    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5cf53c7218b7fd176d54da72155dd87-kubeconfig\") pod \"kube-scheduler-pause-061400\" (UID: \"f5cf53c7218b7fd176d54da72155dd87\") " pod="kube-system/kube-scheduler-pause-061400"
	Feb 19 04:44:49 pause-061400 kubelet[8081]: I0219 04:44:49.523190    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff31bf68418bde452fbbe0538f99857f-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-061400\" (UID: \"ff31bf68418bde452fbbe0538f99857f\") " pod="kube-system/kube-controller-manager-pause-061400"
	Feb 19 04:44:49 pause-061400 kubelet[8081]: I0219 04:44:49.523287    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ff31bf68418bde452fbbe0538f99857f-flexvolume-dir\") pod \"kube-controller-manager-pause-061400\" (UID: \"ff31bf68418bde452fbbe0538f99857f\") " pod="kube-system/kube-controller-manager-pause-061400"
	Feb 19 04:44:49 pause-061400 kubelet[8081]: I0219 04:44:49.523321    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff31bf68418bde452fbbe0538f99857f-ca-certs\") pod \"kube-controller-manager-pause-061400\" (UID: \"ff31bf68418bde452fbbe0538f99857f\") " pod="kube-system/kube-controller-manager-pause-061400"
	Feb 19 04:44:49 pause-061400 kubelet[8081]: I0219 04:44:49.761597    8081 scope.go:115] "RemoveContainer" containerID="80b34d8effdc53dce2993fe2eb94ea4e0f03afd2698b5acb770ce74b0f89fc6b"
	Feb 19 04:44:49 pause-061400 kubelet[8081]: I0219 04:44:49.788749    8081 scope.go:115] "RemoveContainer" containerID="9a8d52471ec08e832ccf7f49afd53644e9754a3e6a868534ba87878254edadf8"
	Feb 19 04:44:49 pause-061400 kubelet[8081]: I0219 04:44:49.801113    8081 scope.go:115] "RemoveContainer" containerID="fa50d192f649422c09a8d323bc4091750b765119cbfe8a9b8606ea1f6351f702"
	Feb 19 04:44:54 pause-061400 kubelet[8081]: I0219 04:44:54.747846    8081 kubelet_node_status.go:108] "Node was previously registered" node="pause-061400"
	Feb 19 04:44:54 pause-061400 kubelet[8081]: I0219 04:44:54.748008    8081 kubelet_node_status.go:73] "Successfully registered node" node="pause-061400"
	Feb 19 04:44:54 pause-061400 kubelet[8081]: I0219 04:44:54.751753    8081 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 19 04:44:54 pause-061400 kubelet[8081]: I0219 04:44:54.753364    8081 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 19 04:44:54 pause-061400 kubelet[8081]: I0219 04:44:54.965098    8081 apiserver.go:52] "Watching apiserver"
	Feb 19 04:44:54 pause-061400 kubelet[8081]: I0219 04:44:54.968064    8081 topology_manager.go:210] "Topology Admit Handler"
	Feb 19 04:44:54 pause-061400 kubelet[8081]: I0219 04:44:54.968331    8081 topology_manager.go:210] "Topology Admit Handler"
	Feb 19 04:44:55 pause-061400 kubelet[8081]: I0219 04:44:55.005196    8081 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 19 04:44:55 pause-061400 kubelet[8081]: I0219 04:44:55.072808    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/305ace80-8a26-4015-a001-8b39b2b2a3ec-config-volume\") pod \"coredns-787d4945fb-mjptj\" (UID: \"305ace80-8a26-4015-a001-8b39b2b2a3ec\") " pod="kube-system/coredns-787d4945fb-mjptj"
	Feb 19 04:44:55 pause-061400 kubelet[8081]: I0219 04:44:55.072894    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86vw6\" (UniqueName: \"kubernetes.io/projected/305ace80-8a26-4015-a001-8b39b2b2a3ec-kube-api-access-86vw6\") pod \"coredns-787d4945fb-mjptj\" (UID: \"305ace80-8a26-4015-a001-8b39b2b2a3ec\") " pod="kube-system/coredns-787d4945fb-mjptj"
	Feb 19 04:44:55 pause-061400 kubelet[8081]: I0219 04:44:55.072931    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/df76445b-2fa1-405c-9cd6-46a18b28ef95-kube-proxy\") pod \"kube-proxy-mgb72\" (UID: \"df76445b-2fa1-405c-9cd6-46a18b28ef95\") " pod="kube-system/kube-proxy-mgb72"
	Feb 19 04:44:55 pause-061400 kubelet[8081]: I0219 04:44:55.072957    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df76445b-2fa1-405c-9cd6-46a18b28ef95-xtables-lock\") pod \"kube-proxy-mgb72\" (UID: \"df76445b-2fa1-405c-9cd6-46a18b28ef95\") " pod="kube-system/kube-proxy-mgb72"
	Feb 19 04:44:55 pause-061400 kubelet[8081]: I0219 04:44:55.072983    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df76445b-2fa1-405c-9cd6-46a18b28ef95-lib-modules\") pod \"kube-proxy-mgb72\" (UID: \"df76445b-2fa1-405c-9cd6-46a18b28ef95\") " pod="kube-system/kube-proxy-mgb72"
	Feb 19 04:44:55 pause-061400 kubelet[8081]: I0219 04:44:55.073043    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jslrv\" (UniqueName: \"kubernetes.io/projected/df76445b-2fa1-405c-9cd6-46a18b28ef95-kube-api-access-jslrv\") pod \"kube-proxy-mgb72\" (UID: \"df76445b-2fa1-405c-9cd6-46a18b28ef95\") " pod="kube-system/kube-proxy-mgb72"
	Feb 19 04:44:55 pause-061400 kubelet[8081]: I0219 04:44:55.073060    8081 reconciler.go:41] "Reconciler: start to sync state"
	Feb 19 04:44:56 pause-061400 kubelet[8081]: I0219 04:44:56.169421    8081 scope.go:115] "RemoveContainer" containerID="c80aca6e1a30ae7bb7e28f355b8c0ce0351a8d9ce6ed85d3fd3a1c1648dbd60d"
	Feb 19 04:44:56 pause-061400 kubelet[8081]: I0219 04:44:56.262564    8081 request.go:690] Waited for 1.087518469s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/coredns/token
	Feb 19 04:44:57 pause-061400 kubelet[8081]: I0219 04:44:57.542370    8081 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac745449796840c61516af32333a6452ed4f81ef04f4bd42071ef043d237531"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-061400 -n pause-061400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-061400 -n pause-061400: (5.5623667s)
helpers_test.go:261: (dbg) Run:  kubectl --context pause-061400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-061400 -n pause-061400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-061400 -n pause-061400: (4.9880292s)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-061400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-061400 logs -n 25: (4.9138757s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-843300 sudo cat              | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /lib/systemd/system/containerd.service |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo cat              | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /etc/containerd/config.toml            |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                  | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | containerd config dump                 |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                  | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl status crio --all            |                           |                   |         |                     |                     |
	|         | --full --no-pager                      |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo                  | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | systemctl cat crio --no-pager          |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo find             | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |                   |         |                     |                     |
	| ssh     | -p cilium-843300 sudo crio             | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT |                     |
	|         | config                                 |                           |                   |         |                     |                     |
	| delete  | -p cilium-843300                       | cilium-843300             | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT | 19 Feb 23 04:33 GMT |
	| start   | -p kubernetes-upgrade-803700           | kubernetes-upgrade-803700 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:33 GMT | 19 Feb 23 04:39 GMT |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0           |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| ssh     | force-systemd-flag-928900              | force-systemd-flag-928900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:35 GMT | 19 Feb 23 04:35 GMT |
	|         | ssh docker info --format               |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                      |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-flag-928900           | force-systemd-flag-928900 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:35 GMT | 19 Feb 23 04:36 GMT |
	| delete  | -p offline-docker-928900               | offline-docker-928900     | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:36 GMT | 19 Feb 23 04:37 GMT |
	| delete  | -p NoKubernetes-928900                 | NoKubernetes-928900       | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:38 GMT | 19 Feb 23 04:39 GMT |
	| start   | -p pause-061400 --memory=2048          | pause-061400              | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:39 GMT | 19 Feb 23 04:41 GMT |
	|         | --install-addons=false                 |                           |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv             |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-803700           | kubernetes-upgrade-803700 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:39 GMT | 19 Feb 23 04:40 GMT |
	| start   | -p stopped-upgrade-608000              | stopped-upgrade-608000    | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:39 GMT |                     |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-803700           | kubernetes-upgrade-803700 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:40 GMT | 19 Feb 23 04:43 GMT |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.26.1           |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-940200              | running-upgrade-940200    | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:41 GMT |                     |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p pause-061400                        | pause-061400              | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:41 GMT | 19 Feb 23 04:45 GMT |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| delete  | -p stopped-upgrade-608000              | stopped-upgrade-608000    | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:42 GMT | 19 Feb 23 04:42 GMT |
	| start   | -p cert-expiration-011800              | cert-expiration-011800    | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:42 GMT |                     |
	|         | --memory=2048                          |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-803700           | kubernetes-upgrade-803700 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:43 GMT |                     |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-803700           | kubernetes-upgrade-803700 | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:43 GMT |                     |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.26.1           |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| delete  | -p running-upgrade-940200              | running-upgrade-940200    | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:44 GMT | 19 Feb 23 04:44 GMT |
	| start   | -p docker-flags-045000                 | docker-flags-045000       | minikube1\jenkins | v1.29.0 | 19 Feb 23 04:44 GMT |                     |
	|         | --cache-images=false                   |                           |                   |         |                     |                     |
	|         | --memory=2048                          |                           |                   |         |                     |                     |
	|         | --install-addons=false                 |                           |                   |         |                     |                     |
	|         | --wait=false                           |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR                   |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT                   |                           |                   |         |                     |                     |
	|         | --docker-opt=debug                     |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	|---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/19 04:44:44
	Running on machine: minikube1
	Binary: Built with gc go1.20 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0219 04:44:44.109149    3548 out.go:296] Setting OutFile to fd 1672 ...
	I0219 04:44:44.177479    3548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:44:44.177479    3548 out.go:309] Setting ErrFile to fd 1760...
	I0219 04:44:44.177479    3548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:44:44.205566    3548 out.go:303] Setting JSON to false
	I0219 04:44:44.209017    3548 start.go:125] hostinfo: {"hostname":"minikube1","uptime":18873,"bootTime":1676763010,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2604 Build 19045.2604","kernelVersion":"10.0.19045.2604 Build 19045.2604","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0219 04:44:44.210007    3548 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0219 04:44:44.223270    3548 out.go:177] * [docker-flags-045000] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	I0219 04:44:44.231122    3548 notify.go:220] Checking for updates...
	I0219 04:44:44.238080    3548 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:44:44.253070    3548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0219 04:44:44.258069    3548 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0219 04:44:44.263068    3548 out.go:177]   - MINIKUBE_LOCATION=master
	I0219 04:44:44.273094    3548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0219 04:44:41.720207   10072 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-011800
	
	I0219 04:44:41.720207   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:42.514034   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:42.514234   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:42.514234   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:44:43.643724   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:44:43.643920   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:43.648244   10072 main.go:141] libmachine: Using SSH client type: native
	I0219 04:44:43.649015   10072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.128 22 <nil> <nil>}
	I0219 04:44:43.649015   10072 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-011800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-011800/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-011800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:44:43.824966   10072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0219 04:44:43.824966   10072 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:44:43.824966   10072 buildroot.go:174] setting up certificates
	I0219 04:44:43.824966   10072 provision.go:83] configureAuth start
	I0219 04:44:43.824966   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:44.668810   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:44.668810   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:44.668810   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:44:45.779351   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:44:45.779396   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:45.779396   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:44.279087    3548 config.go:182] Loaded profile config "cert-expiration-011800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:44:44.280094    3548 config.go:182] Loaded profile config "kubernetes-upgrade-803700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:44:44.280094    3548 config.go:182] Loaded profile config "pause-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:44:44.280094    3548 driver.go:365] Setting default libvirt URI to qemu:///system
	I0219 04:44:46.035036    3548 out.go:177] * Using the hyperv driver based on user configuration
	I0219 04:44:46.036846    3548 start.go:296] selected driver: hyperv
	I0219 04:44:46.040211    3548 start.go:857] validating driver "hyperv" against <nil>
	I0219 04:44:46.040211    3548 start.go:868] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0219 04:44:46.090546    3548 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0219 04:44:46.091547    3548 start_flags.go:914] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0219 04:44:46.091547    3548 cni.go:84] Creating CNI manager for ""
	I0219 04:44:46.091547    3548 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:44:46.091547    3548 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0219 04:44:46.091547    3548 start_flags.go:319] config:
	{Name:docker-flags-045000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:docker-flags-045000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:44:46.091547    3548 iso.go:125] acquiring lock: {Name:mk0a282de77c20a01e287b73437e6c43df35e4e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 04:44:46.095338    3548 out.go:177] * Starting control plane node docker-flags-045000 in cluster docker-flags-045000
	I0219 04:44:47.213662   11108 ssh_runner.go:235] Completed: docker stop 7e146eb9b748 fa50d192f649 a0afef71105f c68aa6c91f3f c80aca6e1a30 80b34d8effdc 9a8d52471ec0 20ba1b6e4482 864e078083a6 15064c3c6813 6da63f0b880e 238e4cc4e697 cb12b8aae594 deb8e4482cca 66f405c5eb82 7c6bf0b18332 bb119d3f422a b6699319530d 5864b4e9f767 a73c696cbf80 d9f886c8b3f1 3cdf0f7fc6f8 aecde6be832f 575d786445b7: (6.0295823s)
	I0219 04:44:47.223763   11108 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0219 04:44:47.282009   11108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0219 04:44:47.298150   11108 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 19 04:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Feb 19 04:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Feb 19 04:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Feb 19 04:41 /etc/kubernetes/scheduler.conf
	
	I0219 04:44:47.307607   11108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0219 04:44:47.338086   11108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0219 04:44:47.363446   11108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0219 04:44:46.100482    3548 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:44:46.100807    3548 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0219 04:44:46.100807    3548 cache.go:57] Caching tarball of preloaded images
	I0219 04:44:46.100965    3548 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0219 04:44:46.100965    3548 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0219 04:44:46.101507    3548 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\docker-flags-045000\config.json ...
	I0219 04:44:46.101772    3548 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\docker-flags-045000\config.json: {Name:mkadbdd8f4461eea2d4941f45670077824768adf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:44:46.101772    3548 cache.go:193] Successfully downloaded all kic artifacts
	I0219 04:44:46.101772    3548 start.go:364] acquiring machines lock for docker-flags-045000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0219 04:44:46.544622   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:46.544622   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:46.544848   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:44:47.602216   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:44:47.602216   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:47.602216   10072 provision.go:138] copyHostCerts
	I0219 04:44:47.602216   10072 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:44:47.602216   10072 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:44:47.602216   10072 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:44:47.604493   10072 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:44:47.604493   10072 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:44:47.604858   10072 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:44:47.605853   10072 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:44:47.605853   10072 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:44:47.606222   10072 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:44:47.607483   10072 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cert-expiration-011800 san=[172.28.248.128 172.28.248.128 localhost 127.0.0.1 minikube cert-expiration-011800]
	I0219 04:44:47.965443   10072 provision.go:172] copyRemoteCerts
	I0219 04:44:47.974446   10072 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:44:47.974446   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:48.735462   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:48.735462   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:48.735538   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:44:49.811784   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:44:49.811830   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:49.812006   10072 sshutil.go:53] new ssh client: &{IP:172.28.248.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-expiration-011800\id_rsa Username:docker}
	I0219 04:44:49.956222   10072 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.9817832s)
	I0219 04:44:49.956713   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:44:50.011153   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0219 04:44:50.054662   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0219 04:44:50.098679   10072 provision.go:86] duration metric: configureAuth took 6.2737353s
	I0219 04:44:50.098679   10072 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:44:50.099630   10072 config.go:182] Loaded profile config "cert-expiration-011800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:44:50.099630   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:50.858158   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:50.858158   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:50.858361   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:44:47.378745   11108 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:47.389393   11108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0219 04:44:47.413775   11108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0219 04:44:47.427857   11108 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0219 04:44:47.436362   11108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0219 04:44:47.459844   11108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0219 04:44:47.476446   11108 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0219 04:44:47.476497   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:47.572429   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:48.433259   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:48.776226   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:48.873899   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:48.995346   11108 api_server.go:51] waiting for apiserver process to appear ...
	I0219 04:44:49.006164   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:44:49.029409   11108 api_server.go:71] duration metric: took 34.0629ms to wait for apiserver process to appear ...
	I0219 04:44:49.029409   11108 api_server.go:87] waiting for apiserver healthz status ...
	I0219 04:44:49.029409   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:51.893188   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:44:51.893244   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:51.897352   10072 main.go:141] libmachine: Using SSH client type: native
	I0219 04:44:51.898094   10072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.128 22 <nil> <nil>}
	I0219 04:44:51.898094   10072 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:44:52.061273   10072 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:44:52.061273   10072 buildroot.go:70] root file system type: tmpfs
	I0219 04:44:52.061456   10072 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:44:52.061524   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:52.812618   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:52.812666   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:52.812698   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:44:53.866667   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:44:53.866667   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:53.869522   10072 main.go:141] libmachine: Using SSH client type: native
	I0219 04:44:53.870523   10072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.128 22 <nil> <nil>}
	I0219 04:44:53.870523   10072 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 04:44:54.061545   10072 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0219 04:44:54.061545   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:54.822314   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:54.822314   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:54.822480   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:44:55.866514   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:44:55.866718   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:55.871698   10072 main.go:141] libmachine: Using SSH client type: native
	I0219 04:44:55.872594   10072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.128 22 <nil> <nil>}
	I0219 04:44:55.872594   10072 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 04:44:54.041283   11108 api_server.go:268] stopped: https://172.28.246.210:8443/healthz: Get "https://172.28.246.210:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0219 04:44:54.552864   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:54.649711   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0219 04:44:54.649711   11108 api_server.go:102] status: https://172.28.246.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0219 04:44:55.044032   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:55.054447   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0219 04:44:55.054447   11108 api_server.go:102] status: https://172.28.246.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:55.549821   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:55.564375   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0219 04:44:55.564375   11108 api_server.go:102] status: https://172.28.246.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0219 04:44:56.053301   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:44:56.061964   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 200:
	ok
	I0219 04:44:56.081908   11108 api_server.go:140] control plane version: v1.26.1
	I0219 04:44:56.082027   11108 api_server.go:130] duration metric: took 7.0526432s to wait for apiserver health ...
	I0219 04:44:56.082165   11108 cni.go:84] Creating CNI manager for ""
	I0219 04:44:56.082165   11108 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:44:56.085258   11108 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0219 04:44:56.096851   11108 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0219 04:44:56.113341   11108 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0219 04:44:56.144521   11108 system_pods.go:43] waiting for kube-system pods to appear ...
	I0219 04:44:56.159074   11108 system_pods.go:59] 6 kube-system pods found
	I0219 04:44:56.159140   11108 system_pods.go:61] "coredns-787d4945fb-mjptj" [305ace80-8a26-4015-a001-8b39b2b2a3ec] Running
	I0219 04:44:56.159169   11108 system_pods.go:61] "etcd-pause-061400" [5a7f37ea-cadb-4c05-8b9a-5348add9549c] Running
	I0219 04:44:56.159169   11108 system_pods.go:61] "kube-apiserver-pause-061400" [ad50ee06-f8fc-4765-b66b-6cfd393e1fc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0219 04:44:56.159169   11108 system_pods.go:61] "kube-controller-manager-pause-061400" [6dbc60d0-4db1-4da3-bb30-987d27afe1fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0219 04:44:56.159214   11108 system_pods.go:61] "kube-proxy-mgb72" [df76445b-2fa1-405c-9cd6-46a18b28ef95] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0219 04:44:56.159214   11108 system_pods.go:61] "kube-scheduler-pause-061400" [74e11769-c7e7-47db-b848-85368297db6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0219 04:44:56.159214   11108 system_pods.go:74] duration metric: took 14.6932ms to wait for pod list to return data ...
	I0219 04:44:56.159244   11108 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:44:56.172932   11108 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:44:56.173105   11108 node_conditions.go:123] node cpu capacity is 2
	I0219 04:44:56.173105   11108 node_conditions.go:105] duration metric: took 13.861ms to run NodePressure ...
	I0219 04:44:56.173105   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0219 04:44:56.697644   11108 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0219 04:44:56.710678   11108 kubeadm.go:784] kubelet initialised
	I0219 04:44:56.710723   11108 kubeadm.go:785] duration metric: took 13.0798ms waiting for restarted kubelet to initialise ...
	I0219 04:44:56.710751   11108 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:44:56.725013   11108 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-mjptj" in "kube-system" namespace to be "Ready" ...
	I0219 04:44:56.759805   11108 pod_ready.go:92] pod "coredns-787d4945fb-mjptj" in "kube-system" namespace has status "Ready":"True"
	I0219 04:44:56.759805   11108 pod_ready.go:81] duration metric: took 34.7536ms waiting for pod "coredns-787d4945fb-mjptj" in "kube-system" namespace to be "Ready" ...
	I0219 04:44:56.759906   11108 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:44:57.130646   10072 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0219 04:44:57.130646   10072 machine.go:91] provisioned docker machine in 20.5168582s
	I0219 04:44:57.130646   10072 client.go:171] LocalClient.Create took 1m7.8090574s
	I0219 04:44:57.130646   10072 start.go:167] duration metric: libmachine.API.Create for "cert-expiration-011800" took 1m7.8090574s
	I0219 04:44:57.130646   10072 start.go:300] post-start starting for "cert-expiration-011800" (driver="hyperv")
	I0219 04:44:57.130646   10072 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 04:44:57.141774   10072 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 04:44:57.141774   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:57.902065   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:57.902065   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:57.902065   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:44:58.942449   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:44:58.942449   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:58.943057   10072 sshutil.go:53] new ssh client: &{IP:172.28.248.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-expiration-011800\id_rsa Username:docker}
	I0219 04:44:59.050005   10072 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.9081345s)
	I0219 04:44:59.060688   10072 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 04:44:59.069091   10072 info.go:137] Remote host: Buildroot 2021.02.12
	I0219 04:44:59.069091   10072 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 04:44:59.069435   10072 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 04:44:59.070481   10072 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 04:44:59.081690   10072 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 04:44:59.098418   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 04:44:59.152917   10072 start.go:303] post-start completed in 2.0222775s
	I0219 04:44:59.155958   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:59.883272   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:44:59.883342   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:44:59.883342   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:00.911035   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:45:00.911035   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:00.911418   10072 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\config.json ...
	I0219 04:45:00.914717   10072 start.go:128] duration metric: createHost completed in 1m11.5964913s
	I0219 04:45:00.914776   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:44:58.810430   11108 pod_ready.go:102] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"False"
	I0219 04:45:01.302021   11108 pod_ready.go:102] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"False"
	I0219 04:45:02.878735    4660 start.go:368] acquired machines lock for "kubernetes-upgrade-803700" in 1m18.9382132s
	I0219 04:45:02.879019    4660 start.go:96] Skipping create...Using existing machine configuration
	I0219 04:45:02.879019    4660 fix.go:55] fixHost starting: 
	I0219 04:45:02.879758    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:03.648880    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:03.649149    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:03.649149    4660 fix.go:103] recreateIfNeeded on kubernetes-upgrade-803700: state=Running err=<nil>
	W0219 04:45:03.649149    4660 fix.go:129] unexpected machine state, will restart: <nil>
	I0219 04:45:03.655474    4660 out.go:177] * Updating the running hyperv "kubernetes-upgrade-803700" VM ...
	I0219 04:45:01.655129   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:01.655129   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:01.655511   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:02.714243   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:45:02.714243   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:02.718947   10072 main.go:141] libmachine: Using SSH client type: native
	I0219 04:45:02.719761   10072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.248.128 22 <nil> <nil>}
	I0219 04:45:02.719761   10072 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0219 04:45:02.878400   10072 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676781902.871976388
	
	I0219 04:45:02.878400   10072 fix.go:207] guest clock: 1676781902.871976388
	I0219 04:45:02.878400   10072 fix.go:220] Guest: 2023-02-19 04:45:02.871976388 +0000 GMT Remote: 2023-02-19 04:45:00.9147176 +0000 GMT m=+155.178942301 (delta=1.957258788s)
	I0219 04:45:02.878400   10072 fix.go:191] guest clock delta is within tolerance: 1.957258788s
	I0219 04:45:02.878495   10072 start.go:83] releasing machines lock for "cert-expiration-011800", held for 1m13.5613659s
	I0219 04:45:02.878632   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:45:03.664974   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:03.664974   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:03.664974   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:04.725489   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:45:04.725489   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:04.729022   10072 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0219 04:45:04.729022   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:45:04.736633   10072 ssh_runner.go:195] Run: cat /version.json
	I0219 04:45:04.736633   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-011800 ).state
	I0219 04:45:05.522128   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:05.522316   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:05.522316   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:05.522818   10072 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:05.522894   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:05.522894   10072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-011800 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:03.657933    4660 machine.go:88] provisioning docker machine ...
	I0219 04:45:03.657933    4660 buildroot.go:166] provisioning hostname "kubernetes-upgrade-803700"
	I0219 04:45:03.657933    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:04.427124    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:04.427453    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:04.427523    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:05.522128    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:05.522316    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:05.525837    4660 main.go:141] libmachine: Using SSH client type: native
	I0219 04:45:05.527224    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.251.111 22 <nil> <nil>}
	I0219 04:45:05.527275    4660 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-803700 && echo "kubernetes-upgrade-803700" | sudo tee /etc/hostname
	I0219 04:45:05.720122    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-803700
	
	I0219 04:45:05.720197    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:06.472623    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:06.472623    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:06.472737    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:03.307911   11108 pod_ready.go:102] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"False"
	I0219 04:45:05.308148   11108 pod_ready.go:102] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"False"
	I0219 04:45:07.309416   11108 pod_ready.go:102] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"False"
	I0219 04:45:07.800023   11108 pod_ready.go:92] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:07.800023   11108 pod_ready.go:81] duration metric: took 11.0401552s waiting for pod "etcd-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.800023   11108 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.815009   11108 pod_ready.go:92] pod "kube-apiserver-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:07.815009   11108 pod_ready.go:81] duration metric: took 14.9857ms waiting for pod "kube-apiserver-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.815009   11108 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.824030   11108 pod_ready.go:92] pod "kube-controller-manager-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:07.824030   11108 pod_ready.go:81] duration metric: took 9.0219ms waiting for pod "kube-controller-manager-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.824030   11108 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mgb72" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.835012   11108 pod_ready.go:92] pod "kube-proxy-mgb72" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:07.835012   11108 pod_ready.go:81] duration metric: took 10.9817ms waiting for pod "kube-proxy-mgb72" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.835012   11108 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.843013   11108 pod_ready.go:92] pod "kube-scheduler-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:07.844056   11108 pod_ready.go:81] duration metric: took 9.0436ms waiting for pod "kube-scheduler-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:07.844056   11108 pod_ready.go:38] duration metric: took 11.1333434s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:45:07.844056   11108 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0219 04:45:07.865633   11108 ops.go:34] apiserver oom_adj: -16
	I0219 04:45:07.865633   11108 kubeadm.go:637] restartCluster took 49.7468737s
	I0219 04:45:07.865633   11108 kubeadm.go:403] StartCluster complete in 49.8293001s
	I0219 04:45:07.865633   11108 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:45:07.865633   11108 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 04:45:07.867640   11108 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:45:07.869661   11108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0219 04:45:07.869661   11108 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0219 04:45:07.873626   11108 out.go:177] * Enabled addons: 
	I0219 04:45:07.869661   11108 config.go:182] Loaded profile config "pause-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:45:07.875636   11108 addons.go:492] enable addons completed in 5.9748ms: enabled=[]
	I0219 04:45:07.878671   11108 kapi.go:59] client config for pause-061400: &rest.Config{Host:"https://172.28.246.210:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-061400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-061400\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e83e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0219 04:45:07.885640   11108 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-061400" context rescaled to 1 replicas
	I0219 04:45:07.885640   11108 start.go:223] Will wait 6m0s for node &{Name: IP:172.28.246.210 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:45:07.887626   11108 out.go:177] * Verifying Kubernetes components...
	I0219 04:45:06.664528   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:45:06.664528   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:06.664965   10072 sshutil.go:53] new ssh client: &{IP:172.28.248.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-expiration-011800\id_rsa Username:docker}
	I0219 04:45:06.685119   10072 main.go:141] libmachine: [stdout =====>] : 172.28.248.128
	
	I0219 04:45:06.685119   10072 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:06.685119   10072 sshutil.go:53] new ssh client: &{IP:172.28.248.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-expiration-011800\id_rsa Username:docker}
	I0219 04:45:06.883984   10072 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.1549697s)
	I0219 04:45:06.884108   10072 ssh_runner.go:235] Completed: cat /version.json: (2.1474191s)
	I0219 04:45:06.894904   10072 ssh_runner.go:195] Run: systemctl --version
	I0219 04:45:06.912828   10072 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0219 04:45:06.921225   10072 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0219 04:45:06.931446   10072 ssh_runner.go:195] Run: which cri-dockerd
	I0219 04:45:06.948787   10072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0219 04:45:06.965033   10072 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0219 04:45:07.004765   10072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0219 04:45:07.031165   10072 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
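The `find ... -exec mv` command above disables conflicting bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix rather than deleting them. A minimal sketch of that rename step, run against a throwaway directory instead of the VM's `/etc/cni/net.d` (file names here are stand-ins):

```shell
# Sketch of the CNI-disable step from the log: rename bridge/podman configs
# so the runtime ignores them, leaving other configs and already-disabled
# files untouched. Operates on a temp dir, not the real /etc/cni/net.d.
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/10-keep.conflist"
find "$d" -maxdepth 1 -type f \( \( -name '*bridge*' -o -name '*podman*' \) \
  -a -not -name '*.mk_disabled' \) -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$d"
```

Because the filter excludes `*.mk_disabled`, re-running the command is a no-op, which is why the provisioner can safely repeat it on restarts.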
	I0219 04:45:07.031165   10072 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:45:07.040090   10072 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:45:07.074978   10072 docker.go:630] Got preloaded images: 
	I0219 04:45:07.074978   10072 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
	I0219 04:45:07.084555   10072 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 04:45:07.113541   10072 ssh_runner.go:195] Run: which lz4
	I0219 04:45:07.128546   10072 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0219 04:45:07.133556   10072 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0219 04:45:07.134554   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416334111 bytes)
	I0219 04:45:09.535999   10072 docker.go:594] Took 2.416131 seconds to copy over tarball
	I0219 04:45:09.546517   10072 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
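The preload sequence above first probes for `/preloaded.tar.lz4` with `stat`, copies the ~416 MB tarball over only when the probe fails, then unpacks it with `tar -I lz4`. A condensed sketch of that check-then-copy step, using local temp files in place of the VM path and `cp` in place of scp:

```shell
# Sketch of minikube's preload existence check: only transfer the tarball
# when stat reports it missing. $src/$dst are stand-ins for the cached
# tarball and the in-VM /preloaded.tar.lz4; cp stands in for the scp.
src=$(mktemp); dst=$(mktemp -u)            # dst deliberately does not exist yet
echo "tarball-bytes" > "$src"
if ! stat -c "%s %y" "$dst" >/dev/null 2>&1; then
  cp "$src" "$dst"                         # real code scp's the preload image here
fi
stat -c "%s" "$dst"
```

The `stat` probe is what produces the "existence check ... Process exited with status 1" line in the log: a non-zero exit is the expected signal to start the transfer, not an error.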
	I0219 04:45:07.901637   11108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:45:08.040097   11108 node_ready.go:35] waiting up to 6m0s for node "pause-061400" to be "Ready" ...
	I0219 04:45:08.041111   11108 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0219 04:45:08.046088   11108 node_ready.go:49] node "pause-061400" has status "Ready":"True"
	I0219 04:45:08.046088   11108 node_ready.go:38] duration metric: took 4.9772ms waiting for node "pause-061400" to be "Ready" ...
	I0219 04:45:08.047114   11108 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:45:08.204562   11108 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-mjptj" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:08.600526   11108 pod_ready.go:92] pod "coredns-787d4945fb-mjptj" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:08.601519   11108 pod_ready.go:81] duration metric: took 396.9585ms waiting for pod "coredns-787d4945fb-mjptj" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:08.601519   11108 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.001285   11108 pod_ready.go:92] pod "etcd-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:09.001285   11108 pod_ready.go:81] duration metric: took 399.7671ms waiting for pod "etcd-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.001285   11108 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.411539   11108 pod_ready.go:92] pod "kube-apiserver-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:09.411539   11108 pod_ready.go:81] duration metric: took 410.2559ms waiting for pod "kube-apiserver-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.411539   11108 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.802176   11108 pod_ready.go:92] pod "kube-controller-manager-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:09.802176   11108 pod_ready.go:81] duration metric: took 390.6384ms waiting for pod "kube-controller-manager-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:09.802176   11108 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mgb72" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:10.203669   11108 pod_ready.go:92] pod "kube-proxy-mgb72" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:10.203669   11108 pod_ready.go:81] duration metric: took 401.4946ms waiting for pod "kube-proxy-mgb72" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:10.203736   11108 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:10.599228   11108 pod_ready.go:92] pod "kube-scheduler-pause-061400" in "kube-system" namespace has status "Ready":"True"
	I0219 04:45:10.599228   11108 pod_ready.go:81] duration metric: took 395.494ms waiting for pod "kube-scheduler-pause-061400" in "kube-system" namespace to be "Ready" ...
	I0219 04:45:10.599228   11108 pod_ready.go:38] duration metric: took 2.5521234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0219 04:45:10.599228   11108 api_server.go:51] waiting for apiserver process to appear ...
	I0219 04:45:10.610148   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:45:10.637038   11108 api_server.go:71] duration metric: took 2.7514082s to wait for apiserver process to appear ...
	I0219 04:45:10.637038   11108 api_server.go:87] waiting for apiserver healthz status ...
	I0219 04:45:10.637038   11108 api_server.go:252] Checking apiserver healthz at https://172.28.246.210:8443/healthz ...
	I0219 04:45:10.647517   11108 api_server.go:278] https://172.28.246.210:8443/healthz returned 200:
	ok
	I0219 04:45:10.649514   11108 api_server.go:140] control plane version: v1.26.1
	I0219 04:45:10.649514   11108 api_server.go:130] duration metric: took 12.4755ms to wait for apiserver health ...
	I0219 04:45:10.649514   11108 system_pods.go:43] waiting for kube-system pods to appear ...
	I0219 04:45:10.811236   11108 system_pods.go:59] 6 kube-system pods found
	I0219 04:45:10.811236   11108 system_pods.go:61] "coredns-787d4945fb-mjptj" [305ace80-8a26-4015-a001-8b39b2b2a3ec] Running
	I0219 04:45:10.811236   11108 system_pods.go:61] "etcd-pause-061400" [5a7f37ea-cadb-4c05-8b9a-5348add9549c] Running
	I0219 04:45:10.811236   11108 system_pods.go:61] "kube-apiserver-pause-061400" [ad50ee06-f8fc-4765-b66b-6cfd393e1fc8] Running
	I0219 04:45:10.811236   11108 system_pods.go:61] "kube-controller-manager-pause-061400" [6dbc60d0-4db1-4da3-bb30-987d27afe1fd] Running
	I0219 04:45:10.811236   11108 system_pods.go:61] "kube-proxy-mgb72" [df76445b-2fa1-405c-9cd6-46a18b28ef95] Running
	I0219 04:45:10.811236   11108 system_pods.go:61] "kube-scheduler-pause-061400" [74e11769-c7e7-47db-b848-85368297db6e] Running
	I0219 04:45:10.811236   11108 system_pods.go:74] duration metric: took 161.7234ms to wait for pod list to return data ...
	I0219 04:45:10.811236   11108 default_sa.go:34] waiting for default service account to be created ...
	I0219 04:45:10.997573   11108 default_sa.go:45] found service account: "default"
	I0219 04:45:10.997671   11108 default_sa.go:55] duration metric: took 186.4355ms for default service account to be created ...
	I0219 04:45:10.997671   11108 system_pods.go:116] waiting for k8s-apps to be running ...
	I0219 04:45:11.249598   11108 system_pods.go:86] 6 kube-system pods found
	I0219 04:45:11.249660   11108 system_pods.go:89] "coredns-787d4945fb-mjptj" [305ace80-8a26-4015-a001-8b39b2b2a3ec] Running
	I0219 04:45:11.249660   11108 system_pods.go:89] "etcd-pause-061400" [5a7f37ea-cadb-4c05-8b9a-5348add9549c] Running
	I0219 04:45:11.249660   11108 system_pods.go:89] "kube-apiserver-pause-061400" [ad50ee06-f8fc-4765-b66b-6cfd393e1fc8] Running
	I0219 04:45:11.249660   11108 system_pods.go:89] "kube-controller-manager-pause-061400" [6dbc60d0-4db1-4da3-bb30-987d27afe1fd] Running
	I0219 04:45:11.249660   11108 system_pods.go:89] "kube-proxy-mgb72" [df76445b-2fa1-405c-9cd6-46a18b28ef95] Running
	I0219 04:45:11.249660   11108 system_pods.go:89] "kube-scheduler-pause-061400" [74e11769-c7e7-47db-b848-85368297db6e] Running
	I0219 04:45:11.249723   11108 system_pods.go:126] duration metric: took 252.0526ms to wait for k8s-apps to be running ...
	I0219 04:45:11.249756   11108 system_svc.go:44] waiting for kubelet service to be running ....
	I0219 04:45:11.260020   11108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:45:11.282404   11108 system_svc.go:56] duration metric: took 32.6482ms WaitForService to wait for kubelet.
	I0219 04:45:11.282470   11108 kubeadm.go:578] duration metric: took 3.3968425s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0219 04:45:11.282470   11108 node_conditions.go:102] verifying NodePressure condition ...
	I0219 04:45:11.494654   11108 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0219 04:45:11.494654   11108 node_conditions.go:123] node cpu capacity is 2
	I0219 04:45:11.494654   11108 node_conditions.go:105] duration metric: took 212.1846ms to run NodePressure ...
	I0219 04:45:11.494654   11108 start.go:228] waiting for startup goroutines ...
	I0219 04:45:11.494654   11108 start.go:233] waiting for cluster config update ...
	I0219 04:45:11.494654   11108 start.go:242] writing updated cluster config ...
	I0219 04:45:11.506084   11108 ssh_runner.go:195] Run: rm -f paused
	I0219 04:45:11.716356   11108 start.go:555] kubectl: 1.18.2, cluster: 1.26.1 (minor skew: 8)
	I0219 04:45:11.786610   11108 out.go:177] 
	W0219 04:45:11.938234   11108 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.1.
	I0219 04:45:12.039082   11108 out.go:177]   - Want kubectl v1.26.1? Try 'minikube kubectl -- get pods -A'
	I0219 04:45:07.601296    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:07.601296    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:07.607288    4660 main.go:141] libmachine: Using SSH client type: native
	I0219 04:45:07.608304    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.251.111 22 <nil> <nil>}
	I0219 04:45:07.608304    4660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-803700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-803700/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-803700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0219 04:45:07.815009    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
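The SSH snippet above ensures the machine's hostname resolves locally: if no `/etc/hosts` line already ends with the profile name, it either rewrites an existing `127.0.1.1` entry in place or appends a fresh one. A runnable sketch of the same logic against a scratch copy of `/etc/hosts` (the seed contents are hypothetical):

```shell
# Sketch of the hostname-ensure logic minikube runs over SSH, targeting a
# temp file instead of the real /etc/hosts.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$HOSTS"
NAME=kubernetes-upgrade-803700
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then          # already present? done
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # Reuse the existing 127.0.1.1 entry rather than adding a duplicate.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Rewriting the `127.0.1.1` line instead of appending keeps the file idempotent across repeated provisioning runs.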
	I0219 04:45:07.815009    4660 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0219 04:45:07.815009    4660 buildroot.go:174] setting up certificates
	I0219 04:45:07.815009    4660 provision.go:83] configureAuth start
	I0219 04:45:07.815009    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:08.627521    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:08.627521    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:08.627521    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:09.813004    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:09.813061    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:09.813061    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:10.562150    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:10.562150    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:10.562150    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:11.653169    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:11.653326    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:11.653326    4660 provision.go:138] copyHostCerts
	I0219 04:45:11.653735    4660 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0219 04:45:11.653735    4660 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0219 04:45:11.654152    4660 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0219 04:45:11.655463    4660 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0219 04:45:11.655532    4660 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0219 04:45:11.655923    4660 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0219 04:45:11.657162    4660 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0219 04:45:11.657235    4660 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0219 04:45:11.657537    4660 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0219 04:45:11.658627    4660 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-803700 san=[172.28.251.111 172.28.251.111 localhost 127.0.0.1 minikube kubernetes-upgrade-803700]
	I0219 04:45:11.877665    4660 provision.go:172] copyRemoteCerts
	I0219 04:45:11.888508    4660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0219 04:45:11.888508    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:12.233949   11108 out.go:177] * Done! kubectl is now configured to use "pause-061400" cluster and "default" namespace by default
	I0219 04:45:12.641892    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:12.641928    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:12.641987    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:13.803165    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:13.803165    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:13.803773    4660 sshutil.go:53] new ssh client: &{IP:172.28.251.111 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\id_rsa Username:docker}
	I0219 04:45:13.917309    4660 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.0288086s)
	I0219 04:45:13.917870    4660 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0219 04:45:13.971591    4660 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0219 04:45:14.020534    4660 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0219 04:45:14.065439    4660 provision.go:86] duration metric: configureAuth took 6.2504525s
	I0219 04:45:14.065439    4660 buildroot.go:189] setting minikube options for container-runtime
	I0219 04:45:14.066113    4660 config.go:182] Loaded profile config "kubernetes-upgrade-803700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:45:14.066113    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:14.911825    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:14.911825    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:14.911825    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:16.022891    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:16.022991    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:16.027596    4660 main.go:141] libmachine: Using SSH client type: native
	I0219 04:45:16.028888    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.251.111 22 <nil> <nil>}
	I0219 04:45:16.028961    4660 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0219 04:45:16.188533    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0219 04:45:16.188533    4660 buildroot.go:70] root file system type: tmpfs
	I0219 04:45:16.189124    4660 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0219 04:45:16.189185    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:16.954593    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:16.954593    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:16.954729    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:18.918712   10072 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.3722276s)
	I0219 04:45:18.918712   10072 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0219 04:45:18.999363   10072 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0219 04:45:19.018306   10072 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0219 04:45:19.057309   10072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:45:19.265914   10072 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:45:18.051066    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:18.051066    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:18.054631    4660 main.go:141] libmachine: Using SSH client type: native
	I0219 04:45:18.055644    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.251.111 22 <nil> <nil>}
	I0219 04:45:18.055644    4660 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0219 04:45:18.224104    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0219 04:45:18.224104    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:18.991339    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:18.991339    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:18.991339    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:20.054343    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:20.054343    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:20.058977    4660 main.go:141] libmachine: Using SSH client type: native
	I0219 04:45:20.060247    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.251.111 22 <nil> <nil>}
	I0219 04:45:20.060247    4660 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0219 04:45:20.217464    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
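The `sudo diff -u ... || { mv ...; systemctl ... }` command above is a change-detection pattern: `diff` exits non-zero only when the freshly generated unit differs from the installed one, so the expensive daemon-reload and docker restart run only on actual changes. A minimal sketch of the pattern with temp files standing in for the systemd unit paths (the `systemctl` steps are omitted since they need a real VM):

```shell
# Hedged sketch of the diff-or-replace pattern from the log. $cur/$new stand
# in for /lib/systemd/system/docker.service and docker.service.new.
cur=$(mktemp); new=$(mktemp)
echo "ExecStart=/usr/bin/dockerd --old-flag" > "$cur"
echo "ExecStart=/usr/bin/dockerd --new-flag" > "$new"
# diff -u exits non-zero when files differ, so the || branch performs the
# swap; on the real VM it would also daemon-reload and restart docker.
diff -u "$cur" "$new" >/dev/null || mv "$new" "$cur"
cat "$cur"
```

When the files already match, `diff` exits 0 and nothing after `||` runs, which is why an unchanged unit costs no docker restart.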
	I0219 04:45:20.217555    4660 machine.go:91] provisioned docker machine in 16.5596799s
	I0219 04:45:20.217555    4660 start.go:300] post-start starting for "kubernetes-upgrade-803700" (driver="hyperv")
	I0219 04:45:20.217555    4660 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0219 04:45:20.228532    4660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0219 04:45:20.228532    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:21.016038    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:21.016038    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:21.016038    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:24.365965    3548 start.go:368] acquired machines lock for "docker-flags-045000" in 38.264327s
	I0219 04:45:24.366608    3548 start.go:93] Provisioning new machine with config: &{Name:docker-flags-045000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:docker-flags-045000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0219 04:45:24.366996    3548 start.go:125] createHost starting for "" (driver="hyperv")
	I0219 04:45:21.776150   10072 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5102443s)
	I0219 04:45:21.776714   10072 start.go:485] detecting cgroup driver to use...
	I0219 04:45:21.776714   10072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:45:21.818692   10072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0219 04:45:21.853540   10072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0219 04:45:21.871542   10072 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0219 04:45:21.880541   10072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0219 04:45:21.908014   10072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:45:21.935487   10072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0219 04:45:21.964502   10072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:45:21.990097   10072 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0219 04:45:22.017782   10072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0219 04:45:22.045081   10072 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0219 04:45:22.072396   10072 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0219 04:45:22.111094   10072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:45:22.302172   10072 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0219 04:45:22.330425   10072 start.go:485] detecting cgroup driver to use...
	I0219 04:45:22.341906   10072 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0219 04:45:22.376565   10072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:45:22.411662   10072 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0219 04:45:22.451656   10072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:45:22.487524   10072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:45:22.520125   10072 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0219 04:45:22.586842   10072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:45:22.609796   10072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:45:22.654157   10072 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0219 04:45:22.850548   10072 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0219 04:45:23.039494   10072 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0219 04:45:23.039494   10072 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0219 04:45:23.083184   10072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:45:23.265197   10072 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0219 04:45:24.976733   10072 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.7115417s)
	I0219 04:45:24.993732   10072 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:45:25.195177   10072 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0219 04:45:25.397201   10072 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0219 04:45:25.601187   10072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:45:25.809345   10072 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0219 04:45:25.846449   10072 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0219 04:45:25.855451   10072 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0219 04:45:25.866356   10072 start.go:553] Will wait 60s for crictl version
	I0219 04:45:25.876385   10072 ssh_runner.go:195] Run: which crictl
	I0219 04:45:25.896340   10072 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0219 04:45:26.066583   10072 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0219 04:45:26.075450   10072 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:45:26.137955   10072 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0219 04:45:22.119678    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:22.119678    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:22.119678    4660 sshutil.go:53] new ssh client: &{IP:172.28.251.111 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\id_rsa Username:docker}
	I0219 04:45:22.231494    4660 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (2.0029684s)
	I0219 04:45:22.241070    4660 ssh_runner.go:195] Run: cat /etc/os-release
	I0219 04:45:22.247774    4660 info.go:137] Remote host: Buildroot 2021.02.12
	I0219 04:45:22.247774    4660 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0219 04:45:22.248526    4660 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0219 04:45:22.249809    4660 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem -> 101482.pem in /etc/ssl/certs
	I0219 04:45:22.259525    4660 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0219 04:45:22.277508    4660 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /etc/ssl/certs/101482.pem (1708 bytes)
	I0219 04:45:22.338490    4660 start.go:303] post-start completed in 2.1209421s
	I0219 04:45:22.338490    4660 fix.go:57] fixHost completed within 19.4595395s
	I0219 04:45:22.338490    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:23.117185    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:23.117401    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:23.117475    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:24.219183    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:24.219183    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:24.222404    4660 main.go:141] libmachine: Using SSH client type: native
	I0219 04:45:24.223442    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd4b120] 0xd4de80 <nil>  [] 0s} 172.28.251.111 22 <nil> <nil>}
	I0219 04:45:24.223520    4660 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0219 04:45:24.365659    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: 1676781924.358704487
	
	I0219 04:45:24.365759    4660 fix.go:207] guest clock: 1676781924.358704487
	I0219 04:45:24.365759    4660 fix.go:220] Guest: 2023-02-19 04:45:24.358704487 +0000 GMT Remote: 2023-02-19 04:45:22.3384904 +0000 GMT m=+100.451662301 (delta=2.020214087s)
	I0219 04:45:24.365965    4660 fix.go:191] guest clock delta is within tolerance: 2.020214087s
	I0219 04:45:24.365965    4660 start.go:83] releasing machines lock for "kubernetes-upgrade-803700", held for 21.4871741s
	I0219 04:45:24.365965    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:25.225143    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:25.225143    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:25.225143    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:26.486922    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:26.487013    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:26.491237    4660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0219 04:45:26.491237    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:26.500111    4660 ssh_runner.go:195] Run: cat /version.json
	I0219 04:45:26.500111    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-803700 ).state
	I0219 04:45:24.369862    3548 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0219 04:45:24.371491    3548 start.go:159] libmachine.API.Create for "docker-flags-045000" (driver="hyperv")
	I0219 04:45:24.371630    3548 client.go:168] LocalClient.Create starting
	I0219 04:45:24.371630    3548 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0219 04:45:24.372433    3548 main.go:141] libmachine: Decoding PEM data...
	I0219 04:45:24.372541    3548 main.go:141] libmachine: Parsing certificate...
	I0219 04:45:24.372731    3548 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0219 04:45:24.372731    3548 main.go:141] libmachine: Decoding PEM data...
	I0219 04:45:24.372731    3548 main.go:141] libmachine: Parsing certificate...
	I0219 04:45:24.372731    3548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0219 04:45:24.892724    3548 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0219 04:45:24.892724    3548 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:24.892724    3548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0219 04:45:25.620838    3548 main.go:141] libmachine: [stdout =====>] : False
	
	I0219 04:45:25.620838    3548 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:25.620921    3548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 04:45:26.236328    3548 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 04:45:26.236610    3548 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:26.236790    3548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 04:45:28.194448    3548 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 04:45:28.194448    3548 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:28.196610    3548 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso...
	I0219 04:45:28.647568    3548 main.go:141] libmachine: Creating SSH key...
	I0219 04:45:28.801260    3548 main.go:141] libmachine: Creating VM...
	I0219 04:45:28.801260    3548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0219 04:45:26.198960   10072 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0219 04:45:26.199961   10072 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0219 04:45:26.206165   10072 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0219 04:45:26.206165   10072 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0219 04:45:26.206165   10072 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0219 04:45:26.206165   10072 ip.go:207] Found interface: {Index:11 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7f:a7:14 Flags:up|broadcast|multicast|running}
	I0219 04:45:26.209608   10072 ip.go:210] interface addr: fe80::8ff9:73c7:b894:c84f/64
	I0219 04:45:26.209608   10072 ip.go:210] interface addr: 172.28.240.1/20
	I0219 04:45:26.220688   10072 ssh_runner.go:195] Run: grep 172.28.240.1	host.minikube.internal$ /etc/hosts
	I0219 04:45:26.227752   10072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.240.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:45:26.253092   10072 localpath.go:92] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\client.crt -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\client.crt
	I0219 04:45:26.255092   10072 localpath.go:117] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\client.key -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\client.key
	I0219 04:45:26.256099   10072 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:45:26.264088   10072 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:45:26.325308   10072 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:45:26.325308   10072 docker.go:560] Images already preloaded, skipping extraction
	I0219 04:45:26.336834   10072 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:45:26.377865   10072 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0219 04:45:26.377865   10072 cache_images.go:84] Images are preloaded, skipping loading
	I0219 04:45:26.393021   10072 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0219 04:45:26.444348   10072 cni.go:84] Creating CNI manager for ""
	I0219 04:45:26.444348   10072 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 04:45:26.444348   10072 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0219 04:45:26.444348   10072 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.248.128 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-011800 NodeName:cert-expiration-011800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.248.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.248.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0219 04:45:26.444348   10072 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.248.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "cert-expiration-011800"
	  kubeletExtraArgs:
	    node-ip: 172.28.248.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.248.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0219 04:45:26.444348   10072 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=cert-expiration-011800 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.248.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:cert-expiration-011800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0219 04:45:26.458120   10072 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0219 04:45:26.475184   10072 binaries.go:44] Found k8s binaries, skipping transfer
	I0219 04:45:26.490106   10072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0219 04:45:26.512820   10072 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (456 bytes)
	I0219 04:45:26.557301   10072 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0219 04:45:26.600709   10072 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0219 04:45:26.652356   10072 ssh_runner.go:195] Run: grep 172.28.248.128	control-plane.minikube.internal$ /etc/hosts
	I0219 04:45:26.658355   10072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.248.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0219 04:45:26.687004   10072 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800 for IP: 172.28.248.128
	I0219 04:45:26.687004   10072 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:45:26.687690   10072 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0219 04:45:26.687690   10072 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0219 04:45:26.688972   10072 certs.go:311] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\client.key
	I0219 04:45:26.688972   10072 certs.go:315] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\apiserver.key.8c31ee89
	I0219 04:45:26.688972   10072 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\apiserver.crt.8c31ee89 with IP's: [172.28.248.128 10.96.0.1 127.0.0.1 10.0.0.1]
	I0219 04:45:26.919782   10072 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\apiserver.crt.8c31ee89 ...
	I0219 04:45:26.919782   10072 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\apiserver.crt.8c31ee89: {Name:mk468cb2c7f135d22e9c6216103659bfc7a59cab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:45:26.921747   10072 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\apiserver.key.8c31ee89 ...
	I0219 04:45:26.921747   10072 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\apiserver.key.8c31ee89: {Name:mk528ca6b49b060fe445470a615a20914f030973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:45:26.922736   10072 certs.go:333] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\apiserver.crt.8c31ee89 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\apiserver.crt
	I0219 04:45:26.930724   10072 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\apiserver.key.8c31ee89 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\apiserver.key
	I0219 04:45:26.931733   10072 certs.go:315] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\proxy-client.key
	I0219 04:45:26.931733   10072 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\proxy-client.crt with IP's: []
	I0219 04:45:27.150886   10072 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\proxy-client.crt ...
	I0219 04:45:27.151913   10072 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\proxy-client.crt: {Name:mk46a9541717a8c4c48fb62fd11c57af94694897 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:45:27.152898   10072 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\proxy-client.key ...
	I0219 04:45:27.152898   10072 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\proxy-client.key: {Name:mk0081be290b4b2689d1c3272c362e3d1c542d75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0219 04:45:27.161897   10072 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem (1338 bytes)
	W0219 04:45:27.161897   10072 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148_empty.pem, impossibly tiny 0 bytes
	I0219 04:45:27.161897   10072 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0219 04:45:27.161897   10072 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0219 04:45:27.161897   10072 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0219 04:45:27.161897   10072 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0219 04:45:27.162916   10072 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem (1708 bytes)
	I0219 04:45:27.163890   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0219 04:45:27.211828   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0219 04:45:27.255580   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0219 04:45:27.318656   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-011800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0219 04:45:27.366861   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0219 04:45:27.426274   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0219 04:45:27.475537   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0219 04:45:27.526351   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0219 04:45:27.580858   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0219 04:45:27.635513   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\10148.pem --> /usr/share/ca-certificates/10148.pem (1338 bytes)
	I0219 04:45:27.685500   10072 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\101482.pem --> /usr/share/ca-certificates/101482.pem (1708 bytes)
	I0219 04:45:27.735554   10072 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0219 04:45:27.780300   10072 ssh_runner.go:195] Run: openssl version
	I0219 04:45:27.800748   10072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0219 04:45:27.836746   10072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:45:27.845596   10072 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 19 03:17 /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:45:27.859555   10072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0219 04:45:27.886797   10072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0219 04:45:27.917844   10072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10148.pem && ln -fs /usr/share/ca-certificates/10148.pem /etc/ssl/certs/10148.pem"
	I0219 04:45:27.955796   10072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10148.pem
	I0219 04:45:27.964320   10072 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 19 03:26 /usr/share/ca-certificates/10148.pem
	I0219 04:45:27.977809   10072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10148.pem
	I0219 04:45:28.004027   10072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10148.pem /etc/ssl/certs/51391683.0"
	I0219 04:45:28.037248   10072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101482.pem && ln -fs /usr/share/ca-certificates/101482.pem /etc/ssl/certs/101482.pem"
	I0219 04:45:28.078248   10072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101482.pem
	I0219 04:45:28.086388   10072 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 19 03:26 /usr/share/ca-certificates/101482.pem
	I0219 04:45:28.098079   10072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101482.pem
	I0219 04:45:28.128311   10072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101482.pem /etc/ssl/certs/3ec20f2e.0"
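	The certificate steps above follow a fixed pattern: hash each PEM with `openssl x509 -hash -noout`, then symlink it as `/etc/ssl/certs/<hash>.0` (the `b5213941.0` and `3ec20f2e.0` names in the log are exactly such subject hashes). A minimal sketch of that naming scheme, assuming `openssl` is on PATH and using a throwaway self-signed cert in `/tmp` rather than the real minikube CA:

```shell
set -e

# Generate a throwaway self-signed cert (stand-in for minikubeCA.pem).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout /tmp/ca.key -out /tmp/ca.pem -days 1 2>/dev/null

# The subject hash (8 hex digits) names the trust-store symlink.
hash=$(openssl x509 -hash -noout -in /tmp/ca.pem)
echo "would link /etc/ssl/certs/${hash}.0 -> /tmp/ca.pem"
```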
	I0219 04:45:28.149606   10072 kubeadm.go:401] StartCluster: {Name:cert-expiration-011800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:cert-expiration-011800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.28.248.128 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 04:45:28.158890   10072 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0219 04:45:28.217523   10072 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0219 04:45:28.250700   10072 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0219 04:45:28.278672   10072 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0219 04:45:28.300672   10072 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0219 04:45:28.300672   10072 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0219 04:45:28.647504   10072 kubeadm.go:322] W0219 04:45:28.638537    1496 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0219 04:45:29.325459   10072 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0219 04:45:27.296824    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:27.296824    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:27.296824    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:27.305491    4660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:45:27.305740    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:27.305740    4660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-803700 ).networkadapters[0]).ipaddresses[0]
	I0219 04:45:28.609564    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:28.609564    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:28.609564    4660 sshutil.go:53] new ssh client: &{IP:172.28.251.111 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\id_rsa Username:docker}
	I0219 04:45:28.655802    4660 main.go:141] libmachine: [stdout =====>] : 172.28.251.111
	
	I0219 04:45:28.655802    4660 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:28.655802    4660 sshutil.go:53] new ssh client: &{IP:172.28.251.111 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-803700\id_rsa Username:docker}
	I0219 04:45:28.726776    4660 ssh_runner.go:235] Completed: cat /version.json: (2.2266731s)
	I0219 04:45:28.740602    4660 ssh_runner.go:195] Run: systemctl --version
	I0219 04:45:28.805389    4660 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.31416s)
	I0219 04:45:28.822701    4660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0219 04:45:28.830626    4660 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0219 04:45:28.843657    4660 ssh_runner.go:195] Run: which cri-dockerd
	I0219 04:45:28.861810    4660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0219 04:45:28.884207    4660 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0219 04:45:28.939294    4660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0219 04:45:28.977546    4660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0219 04:45:29.006647    4660 cni.go:307] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
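	The two `find ... -exec sed` runs above rewrite any pre-existing bridge/podman CNI configs so their subnet and gateway match the cluster pod CIDR. A minimal sketch of the podman rewrite, using a hypothetical conflist under `/tmp` with made-up original values (the real file is `/etc/cni/net.d/87-podman-bridge.conflist`):

```shell
set -e

# Hypothetical podman bridge conflist; 10.88.0.0/16 is podman's default.
cat > /tmp/87-podman-bridge.conflist <<'EOF'
{
  "plugins": [
    {
      "type": "bridge",
      "ipam": {
        "subnet": "10.88.0.0/16",
        "gateway": "10.88.0.1"
      }
    }
  ]
}
EOF

# Same substitutions as the log: retarget subnet and gateway to 10.244.0.0/16.
sed -i -r \
  -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
  -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' \
  /tmp/87-podman-bridge.conflist

grep -q '10.244.0.0/16' /tmp/87-podman-bridge.conflist && echo patched
```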
	I0219 04:45:29.006647    4660 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 04:45:29.020104    4660 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0219 04:45:29.058514    4660 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0219 04:45:29.058514    4660 docker.go:560] Images already preloaded, skipping extraction
	I0219 04:45:29.058514    4660 start.go:485] detecting cgroup driver to use...
	I0219 04:45:29.058514    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:45:29.113317    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0219 04:45:29.146733    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0219 04:45:29.168354    4660 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0219 04:45:29.179348    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0219 04:45:29.216310    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:45:29.255233    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0219 04:45:29.287105    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0219 04:45:29.330483    4660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0219 04:45:29.361224    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0219 04:45:29.390225    4660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0219 04:45:29.425961    4660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0219 04:45:29.452957    4660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:45:29.708884    4660 ssh_runner.go:195] Run: sudo systemctl restart containerd
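	The chain of `sed` edits above pins containerd to the runc v2 shim, the "cgroupfs" cgroup driver (`SystemdCgroup = false`), and `/etc/cni/net.d` as the CNI conf dir, before the daemon-reload/restart. A minimal sketch of two of those edits against a hypothetical `config.toml` fragment in `/tmp` (not the VM's real file):

```shell
set -e

# Illustrative fragment only; keys mirror the ones the log edits.
cat > /tmp/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".cni]
  conf_dir = "/etc/cni/net.mk"
EOF

# Force the cgroupfs driver, preserving indentation via the capture group.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.toml
# Point the CNI conf dir at the standard location.
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /tmp/config.toml

grep SystemdCgroup /tmp/config.toml
```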
	I0219 04:45:29.748599    4660 start.go:485] detecting cgroup driver to use...
	I0219 04:45:29.757872    4660 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0219 04:45:29.789183    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:45:29.821120    4660 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0219 04:45:30.018163    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0219 04:45:30.063485    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0219 04:45:30.088504    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0219 04:45:30.133909    4660 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0219 04:45:30.404284    4660 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0219 04:45:30.665446    4660 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0219 04:45:30.665446    4660 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0219 04:45:30.705406    4660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0219 04:45:30.989379    4660 ssh_runner.go:195] Run: sudo systemctl restart docker
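	The 144-byte `/etc/docker/daemon.json` scp'd above is not shown in the log; the content below is an assumption reconstructed from the 'configuring docker to use "cgroupfs"' message, paired with the JSON sanity check worth running before `systemctl restart docker`, since a malformed daemon.json keeps dockerd from starting:

```shell
set -e

# Assumed daemon.json content -- illustrative, not the real 144-byte payload.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF

# Validate before (hypothetically) restarting dockerd against it.
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "valid json"
```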
	I0219 04:45:30.468610    3548 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0219 04:45:30.468610    3548 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:30.468610    3548 main.go:141] libmachine: Using switch "Default Switch"
	I0219 04:45:30.468610    3548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0219 04:45:31.191744    3548 main.go:141] libmachine: [stdout =====>] : True
	
	I0219 04:45:31.191955    3548 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:31.192038    3548 main.go:141] libmachine: Creating VHD
	I0219 04:45:31.192092    3548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\docker-flags-045000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0219 04:45:32.984354    3548 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\docker-flags-045000\fixed.
	                          vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : FDA21285-2199-469B-B7C6-D820A960DE23
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0219 04:45:32.984354    3548 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:45:32.984354    3548 main.go:141] libmachine: Writing magic tar header
	I0219 04:45:32.984354    3548 main.go:141] libmachine: Writing SSH key tar header
	I0219 04:45:32.996349    3548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\docker-flags-045000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\docker-flags-045000\disk.vhd' -VHDType Dynamic -DeleteSource
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sun 2023-02-19 04:40:04 UTC, ends at Sun 2023-02-19 04:45:39 UTC. --
	Feb 19 04:44:47 pause-061400 dockerd[5477]: time="2023-02-19T04:44:47.151467615Z" level=warning msg="cleanup warnings time=\"2023-02-19T04:44:47Z\" level=info msg=\"starting signal loop\" namespace=moby pid=7994 runtime=io.containerd.runc.v2\n"
	Feb 19 04:44:49 pause-061400 dockerd[5477]: time="2023-02-19T04:44:49.967716322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:44:49 pause-061400 dockerd[5477]: time="2023-02-19T04:44:49.967856021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:44:49 pause-061400 dockerd[5477]: time="2023-02-19T04:44:49.967875321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:44:49 pause-061400 dockerd[5477]: time="2023-02-19T04:44:49.970165814Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f6fd40251a2cd10d00add36019ae5440f8b22340454e36cac5db6c0a8d14de5a pid=8251 runtime=io.containerd.runc.v2
	Feb 19 04:44:50 pause-061400 dockerd[5477]: time="2023-02-19T04:44:50.031507720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:44:50 pause-061400 dockerd[5477]: time="2023-02-19T04:44:50.031607820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:44:50 pause-061400 dockerd[5477]: time="2023-02-19T04:44:50.031638120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:44:50 pause-061400 dockerd[5477]: time="2023-02-19T04:44:50.031948619Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f1ea4dbac71cbdbcf3c31ce965eb41575e22631dedaea2d6968d45f9ace4730d pid=8295 runtime=io.containerd.runc.v2
	Feb 19 04:44:50 pause-061400 dockerd[5477]: time="2023-02-19T04:44:50.031292121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:44:50 pause-061400 dockerd[5477]: time="2023-02-19T04:44:50.033345815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:44:50 pause-061400 dockerd[5477]: time="2023-02-19T04:44:50.033428614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:44:50 pause-061400 dockerd[5477]: time="2023-02-19T04:44:50.034351811Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/175333ff2e0487da9992eec10020ddf1ed0f5ac49cd410fcad052c6d1b6aaae0 pid=8291 runtime=io.containerd.runc.v2
	Feb 19 04:44:56 pause-061400 dockerd[5477]: time="2023-02-19T04:44:56.322916347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:44:56 pause-061400 dockerd[5477]: time="2023-02-19T04:44:56.322973947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:44:56 pause-061400 dockerd[5477]: time="2023-02-19T04:44:56.322986547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:44:56 pause-061400 dockerd[5477]: time="2023-02-19T04:44:56.323187246Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/eea008990fc5459c7dc4700c8c5b0fe979de298d2eaa68b2d5f5757cb63959d6 pid=8473 runtime=io.containerd.runc.v2
	Feb 19 04:44:56 pause-061400 dockerd[5477]: time="2023-02-19T04:44:56.808387266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:44:56 pause-061400 dockerd[5477]: time="2023-02-19T04:44:56.808457566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:44:56 pause-061400 dockerd[5477]: time="2023-02-19T04:44:56.808471165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:44:56 pause-061400 dockerd[5477]: time="2023-02-19T04:44:56.811328557Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1ac745449796840c61516af32333a6452ed4f81ef04f4bd42071ef043d237531 pid=8565 runtime=io.containerd.runc.v2
	Feb 19 04:44:57 pause-061400 dockerd[5477]: time="2023-02-19T04:44:57.736117159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 19 04:44:57 pause-061400 dockerd[5477]: time="2023-02-19T04:44:57.736178659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 19 04:44:57 pause-061400 dockerd[5477]: time="2023-02-19T04:44:57.736207859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 19 04:44:57 pause-061400 dockerd[5477]: time="2023-02-19T04:44:57.736729258Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a13216268eec03b0be16bee037cde662a01c41c63fab4cbba7bd36383004165f pid=8674 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	a13216268eec0       5185b96f0becf       42 seconds ago       Running             coredns                   2                   1ac7454497968
	eea008990fc54       46a6bb3c77ce0       43 seconds ago       Running             kube-proxy                2                   45ac4e42f0ee5
	175333ff2e048       655493523f607       50 seconds ago       Running             kube-scheduler            3                   42ece85db173c
	f1ea4dbac71cb       e9c08e11b07f6       50 seconds ago       Running             kube-controller-manager   3                   f41416dad1e91
	f6fd40251a2cd       fce326961ae2d       50 seconds ago       Running             etcd                      3                   abd19d387f579
	2b3c01bd2cb0b       deb04688c4a35       54 seconds ago       Running             kube-apiserver            3                   3e7d944f36e27
	7e146eb9b7480       5185b96f0becf       About a minute ago   Exited              coredns                   1                   a0afef71105f2
	fa50d192f6494       655493523f607       About a minute ago   Exited              kube-scheduler            2                   6da63f0b880ee
	c68aa6c91f3f1       deb04688c4a35       About a minute ago   Exited              kube-apiserver            2                   238e4cc4e6973
	c80aca6e1a30a       46a6bb3c77ce0       About a minute ago   Exited              kube-proxy                1                   20ba1b6e44821
	80b34d8effdc5       fce326961ae2d       About a minute ago   Exited              etcd                      2                   864e078083a69
	9a8d52471ec08       e9c08e11b07f6       About a minute ago   Exited              kube-controller-manager   2                   15064c3c6813d
	
	* 
	* ==> coredns [7e146eb9b748] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = dc373b1a880fdd4ccb700cff30600cc4bf8c50378309c853254a8500867351a3e9142cc9578843a443961b28e6690d646b490f89e043822a41fbe79aabc9a951
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:46953 - 36995 "HINFO IN 1726034603133606588.8159892721735991185. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032888463s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [a13216268eec] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = dc373b1a880fdd4ccb700cff30600cc4bf8c50378309c853254a8500867351a3e9142cc9578843a443961b28e6690d646b490f89e043822a41fbe79aabc9a951
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:38635 - 39239 "HINFO IN 548711931721667633.127040790714027977. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.026858626s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-061400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-061400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b522747fea7d12101d906a75c46b71d9d9f96e61
	                    minikube.k8s.io/name=pause-061400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_19T04_41_33_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Feb 2023 04:41:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-061400
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Feb 2023 04:45:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Feb 2023 04:44:54 +0000   Sun, 19 Feb 2023 04:41:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Feb 2023 04:44:54 +0000   Sun, 19 Feb 2023 04:41:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Feb 2023 04:44:54 +0000   Sun, 19 Feb 2023 04:41:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Feb 2023 04:44:54 +0000   Sun, 19 Feb 2023 04:41:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.246.210
	  Hostname:    pause-061400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	System Info:
	  Machine ID:                 d23f6a8ecf094da2bb8bca5e6922005a
	  System UUID:                a3845a9c-434a-d844-a7a5-67e7ad1bb4c1
	  Boot ID:                    a4f2da02-6178-4141-9bba-a2a84c6dfa59
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-787d4945fb-mjptj                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m53s
	  kube-system                 etcd-pause-061400                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         4m7s
	  kube-system                 kube-apiserver-pause-061400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-controller-manager-pause-061400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-proxy-mgb72                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-scheduler-pause-061400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m50s                  kube-proxy       
	  Normal  Starting                 42s                    kube-proxy       
	  Normal  Starting                 66s                    kube-proxy       
	  Normal  NodeHasSufficientPID     4m21s (x6 over 4m21s)  kubelet          Node pause-061400 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    4m21s (x6 over 4m21s)  kubelet          Node pause-061400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m21s (x7 over 4m21s)  kubelet          Node pause-061400 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m6s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m5s                   kubelet          Node pause-061400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s                   kubelet          Node pause-061400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s                   kubelet          Node pause-061400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m5s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m3s                   kubelet          Node pause-061400 status is now: NodeReady
	  Normal  RegisteredNode           3m53s                  node-controller  Node pause-061400 event: Registered Node pause-061400 in Controller
	  Normal  Starting                 51s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)      kubelet          Node pause-061400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)      kubelet          Node pause-061400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x7 over 50s)      kubelet          Node pause-061400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  50s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           32s                    node-controller  Node pause-061400 event: Registered Node pause-061400 in Controller
	
	* 
	* ==> dmesg <==
	* [  +2.355292] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.723036] systemd-fstab-generator[1084]: Ignoring "noauto" for root device
	[  +0.530854] systemd-fstab-generator[1122]: Ignoring "noauto" for root device
	[  +0.192737] systemd-fstab-generator[1133]: Ignoring "noauto" for root device
	[  +0.199434] systemd-fstab-generator[1146]: Ignoring "noauto" for root device
	[  +1.709069] systemd-fstab-generator[1293]: Ignoring "noauto" for root device
	[  +0.185143] systemd-fstab-generator[1304]: Ignoring "noauto" for root device
	[  +0.184794] systemd-fstab-generator[1315]: Ignoring "noauto" for root device
	[  +0.178472] systemd-fstab-generator[1326]: Ignoring "noauto" for root device
	[  +6.410789] systemd-fstab-generator[1572]: Ignoring "noauto" for root device
	[  +0.907289] kauditd_printk_skb: 68 callbacks suppressed
	[ +15.082353] systemd-fstab-generator[2460]: Ignoring "noauto" for root device
	[ +15.612353] kauditd_printk_skb: 8 callbacks suppressed
	[Feb19 04:43] systemd-fstab-generator[4641]: Ignoring "noauto" for root device
	[  +0.491028] systemd-fstab-generator[4673]: Ignoring "noauto" for root device
	[  +0.221651] systemd-fstab-generator[4684]: Ignoring "noauto" for root device
	[  +0.279987] systemd-fstab-generator[4704]: Ignoring "noauto" for root device
	[  +5.274354] kauditd_printk_skb: 21 callbacks suppressed
	[Feb19 04:44] systemd-fstab-generator[5952]: Ignoring "noauto" for root device
	[  +0.271137] systemd-fstab-generator[5989]: Ignoring "noauto" for root device
	[  +0.263398] systemd-fstab-generator[6044]: Ignoring "noauto" for root device
	[  +0.261800] systemd-fstab-generator[6070]: Ignoring "noauto" for root device
	[  +6.269630] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.734267] kauditd_printk_skb: 6 callbacks suppressed
	[ +19.349897] systemd-fstab-generator[8075]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [80b34d8effdc] <==
	* {"level":"info","ts":"2023-02-19T04:44:40.845Z","caller":"traceutil/trace.go:171","msg":"trace[166448543] linearizableReadLoop","detail":"{readStateIndex:501; appliedIndex:500; }","duration":"362.027172ms","start":"2023-02-19T04:44:40.483Z","end":"2023-02-19T04:44:40.845Z","steps":["trace[166448543] 'read index received'  (duration: 203.283154ms)","trace[166448543] 'applied index is now lower than readState.Index'  (duration: 158.742618ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:44:41.181Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"231.618861ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7521159158890045686 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-787d4945fb-mjptj.174520330e30183d\" mod_revision:449 > success:<request_put:<key:\"/registry/events/kube-system/coredns-787d4945fb-mjptj.174520330e30183d\" value_size:584 lease:7521159158890045563 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-787d4945fb-mjptj.174520330e30183d\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-02-19T04:44:41.181Z","caller":"traceutil/trace.go:171","msg":"trace[1886732854] linearizableReadLoop","detail":"{readStateIndex:506; appliedIndex:505; }","duration":"243.408818ms","start":"2023-02-19T04:44:40.938Z","end":"2023-02-19T04:44:41.181Z","steps":["trace[1886732854] 'read index received'  (duration: 11.528458ms)","trace[1886732854] 'applied index is now lower than readState.Index'  (duration: 231.87936ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:44:41.181Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"271.107316ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:node-controller\" ","response":"range_response_count:1 size:835"}
	{"level":"info","ts":"2023-02-19T04:44:41.181Z","caller":"traceutil/trace.go:171","msg":"trace[1281564352] range","detail":"{range_begin:/registry/clusterroles/system:controller:node-controller; range_end:; response_count:1; response_revision:457; }","duration":"271.129816ms","start":"2023-02-19T04:44:40.910Z","end":"2023-02-19T04:44:41.181Z","steps":["trace[1281564352] 'agreement among raft nodes before linearized reading'  (duration: 270.965317ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-19T04:44:41.182Z","caller":"traceutil/trace.go:171","msg":"trace[1107592853] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"272.572311ms","start":"2023-02-19T04:44:40.909Z","end":"2023-02-19T04:44:41.182Z","steps":["trace[1107592853] 'process raft request'  (duration: 40.188953ms)","trace[1107592853] 'compare'  (duration: 229.01347ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:44:41.798Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"473.156692ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7521159158890045690 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-787d4945fb-mjptj.174520335c268976\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-787d4945fb-mjptj.174520335c268976\" value_size:890 lease:7521159158890045563 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-02-19T04:44:41.798Z","caller":"traceutil/trace.go:171","msg":"trace[681197953] linearizableReadLoop","detail":"{readStateIndex:507; appliedIndex:506; }","duration":"603.88702ms","start":"2023-02-19T04:44:41.194Z","end":"2023-02-19T04:44:41.798Z","steps":["trace[681197953] 'read index received'  (duration: 130.462829ms)","trace[681197953] 'applied index is now lower than readState.Index'  (duration: 473.422491ms)"],"step_count":2}
	{"level":"info","ts":"2023-02-19T04:44:41.798Z","caller":"traceutil/trace.go:171","msg":"trace[2047180931] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"604.285418ms","start":"2023-02-19T04:44:41.194Z","end":"2023-02-19T04:44:41.798Z","steps":["trace[2047180931] 'process raft request'  (duration: 130.842028ms)","trace[2047180931] 'compare'  (duration: 472.822793ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:44:41.798Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:44:41.194Z","time spent":"604.353318ms","remote":"127.0.0.1:43210","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":978,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-787d4945fb-mjptj.174520335c268976\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-787d4945fb-mjptj.174520335c268976\" value_size:890 lease:7521159158890045563 >> failure:<>"}
	{"level":"warn","ts":"2023-02-19T04:44:41.798Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"494.181316ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-02-19T04:44:41.798Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"604.507317ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:pod-garbage-collector\" ","response":"range_response_count:1 size:663"}
	{"level":"info","ts":"2023-02-19T04:44:41.798Z","caller":"traceutil/trace.go:171","msg":"trace[1097732361] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:458; }","duration":"494.254915ms","start":"2023-02-19T04:44:41.304Z","end":"2023-02-19T04:44:41.798Z","steps":["trace[1097732361] 'agreement among raft nodes before linearized reading'  (duration: 494.127516ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-19T04:44:41.798Z","caller":"traceutil/trace.go:171","msg":"trace[650200663] range","detail":"{range_begin:/registry/clusterroles/system:controller:pod-garbage-collector; range_end:; response_count:1; response_revision:458; }","duration":"604.538317ms","start":"2023-02-19T04:44:41.194Z","end":"2023-02-19T04:44:41.798Z","steps":["trace[650200663] 'agreement among raft nodes before linearized reading'  (duration: 604.303518ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-19T04:44:41.798Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:44:41.304Z","time spent":"494.314315ms","remote":"127.0.0.1:43186","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-02-19T04:44:41.798Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:44:41.194Z","time spent":"604.579617ms","remote":"127.0.0.1:43272","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":686,"request content":"key:\"/registry/clusterroles/system:controller:pod-garbage-collector\" "}
	{"level":"info","ts":"2023-02-19T04:44:41.965Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-02-19T04:44:41.965Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-061400","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.28.246.210:2380"],"advertise-client-urls":["https://172.28.246.210:2379"]}
	{"level":"warn","ts":"2023-02-19T04:44:42.087Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"183.522742ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:replication-controller\" ","response":"","error":"rangeKeys: context cancelled: context canceled"}
	{"level":"info","ts":"2023-02-19T04:44:42.087Z","caller":"traceutil/trace.go:171","msg":"trace[1152573996] range","detail":"{range_begin:/registry/clusterroles/system:controller:replication-controller; range_end:; }","duration":"183.611742ms","start":"2023-02-19T04:44:41.903Z","end":"2023-02-19T04:44:42.087Z","steps":["trace[1152573996] 'range keys from in-memory index tree'  (duration: 183.424543ms)"],"step_count":1}
	WARNING: 2023/02/19 04:44:42 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2023-02-19T04:44:42.103Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f3a8ca31a8b26860","current-leader-member-id":"f3a8ca31a8b26860"}
	{"level":"info","ts":"2023-02-19T04:44:42.281Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"172.28.246.210:2380"}
	{"level":"info","ts":"2023-02-19T04:44:42.283Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"172.28.246.210:2380"}
	{"level":"info","ts":"2023-02-19T04:44:42.283Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-061400","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.28.246.210:2380"],"advertise-client-urls":["https://172.28.246.210:2379"]}
	
	* 
	* ==> etcd [f6fd40251a2c] <==
	* {"level":"info","ts":"2023-02-19T04:44:51.204Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-19T04:44:51.206Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"172.28.246.210:2380"}
	{"level":"info","ts":"2023-02-19T04:44:51.206Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"172.28.246.210:2380"}
	{"level":"info","ts":"2023-02-19T04:44:51.206Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"f3a8ca31a8b26860","initial-advertise-peer-urls":["https://172.28.246.210:2380"],"listen-peer-urls":["https://172.28.246.210:2380"],"advertise-client-urls":["https://172.28.246.210:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.246.210:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-19T04:44:51.206Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-19T04:44:52.112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f3a8ca31a8b26860 is starting a new election at term 4"}
	{"level":"info","ts":"2023-02-19T04:44:52.112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f3a8ca31a8b26860 became pre-candidate at term 4"}
	{"level":"info","ts":"2023-02-19T04:44:52.112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f3a8ca31a8b26860 received MsgPreVoteResp from f3a8ca31a8b26860 at term 4"}
	{"level":"info","ts":"2023-02-19T04:44:52.112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f3a8ca31a8b26860 became candidate at term 5"}
	{"level":"info","ts":"2023-02-19T04:44:52.112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f3a8ca31a8b26860 received MsgVoteResp from f3a8ca31a8b26860 at term 5"}
	{"level":"info","ts":"2023-02-19T04:44:52.112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f3a8ca31a8b26860 became leader at term 5"}
	{"level":"info","ts":"2023-02-19T04:44:52.112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f3a8ca31a8b26860 elected leader f3a8ca31a8b26860 at term 5"}
	{"level":"info","ts":"2023-02-19T04:44:52.121Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"f3a8ca31a8b26860","local-member-attributes":"{Name:pause-061400 ClientURLs:[https://172.28.246.210:2379]}","request-path":"/0/members/f3a8ca31a8b26860/attributes","cluster-id":"5f814601d2eff1a5","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-19T04:44:52.121Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-19T04:44:52.123Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-19T04:44:52.123Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-19T04:44:52.124Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-19T04:44:52.124Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-19T04:44:52.141Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"172.28.246.210:2379"}
	{"level":"info","ts":"2023-02-19T04:45:15.378Z","caller":"traceutil/trace.go:171","msg":"trace[544152551] transaction","detail":"{read_only:false; response_revision:535; number_of_response:1; }","duration":"166.54475ms","start":"2023-02-19T04:45:15.212Z","end":"2023-02-19T04:45:15.378Z","steps":["trace[544152551] 'process raft request'  (duration: 166.231051ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-19T04:45:15.738Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"147.653989ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7521159158896142680 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-061400\" mod_revision:520 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-061400\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-061400\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-02-19T04:45:15.738Z","caller":"traceutil/trace.go:171","msg":"trace[26628075] transaction","detail":"{read_only:false; response_revision:536; number_of_response:1; }","duration":"320.918225ms","start":"2023-02-19T04:45:15.417Z","end":"2023-02-19T04:45:15.738Z","steps":["trace[26628075] 'process raft request'  (duration: 172.192638ms)","trace[26628075] 'compare'  (duration: 147.31369ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-19T04:45:15.738Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-19T04:45:15.417Z","time spent":"321.200925ms","remote":"127.0.0.1:43986","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-061400\" mod_revision:520 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-061400\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-061400\" > >"}
	{"level":"warn","ts":"2023-02-19T04:45:16.104Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"118.394455ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:341"}
	{"level":"info","ts":"2023-02-19T04:45:16.104Z","caller":"traceutil/trace.go:171","msg":"trace[832395901] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:536; }","duration":"118.947254ms","start":"2023-02-19T04:45:15.985Z","end":"2023-02-19T04:45:16.104Z","steps":["trace[832395901] 'range keys from in-memory index tree'  (duration: 118.030255ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  04:45:39 up 5 min,  0 users,  load average: 1.32, 0.96, 0.45
	Linux pause-061400 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2b3c01bd2cb0] <==
	* I0219 04:44:54.534398       1 establishing_controller.go:76] Starting EstablishingController
	I0219 04:44:54.534476       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0219 04:44:54.534506       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0219 04:44:54.534520       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0219 04:44:54.523577       1 autoregister_controller.go:141] Starting autoregister controller
	I0219 04:44:54.587496       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0219 04:44:54.692317       1 cache.go:39] Caches are synced for autoregister controller
	I0219 04:44:54.726753       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0219 04:44:54.727595       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0219 04:44:54.728503       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0219 04:44:54.728737       1 shared_informer.go:280] Caches are synced for configmaps
	I0219 04:44:54.729008       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0219 04:44:54.729153       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0219 04:44:54.729487       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0219 04:44:54.743658       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0219 04:44:54.767417       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0219 04:44:55.140185       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0219 04:44:55.535669       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0219 04:44:56.347057       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0219 04:44:56.388131       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0219 04:44:56.456835       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0219 04:44:56.541826       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0219 04:44:56.581082       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0219 04:45:07.245454       1 controller.go:615] quota admission added evaluator for: endpoints
	I0219 04:45:07.258392       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [c68aa6c91f3f] <==
	* W0219 04:44:42.996372       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0219 04:44:42.996407       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0219 04:44:42.996452       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	E0219 04:44:43.200574       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	
	* 
	* ==> kube-controller-manager [9a8d52471ec0] <==
	* I0219 04:44:26.309688       1 serving.go:348] Generated self-signed cert in-memory
	I0219 04:44:27.391352       1 controllermanager.go:182] Version: v1.26.1
	I0219 04:44:27.391408       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0219 04:44:27.393674       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0219 04:44:27.394852       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0219 04:44:27.394974       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0219 04:44:27.395071       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-controller-manager [f1ea4dbac71c] <==
	* I0219 04:45:07.181192       1 shared_informer.go:280] Caches are synced for certificate-csrapproving
	I0219 04:45:07.181486       1 shared_informer.go:280] Caches are synced for deployment
	I0219 04:45:07.182702       1 shared_informer.go:280] Caches are synced for taint
	I0219 04:45:07.182996       1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone: 
	W0219 04:45:07.183215       1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-061400. Assuming now as a timestamp.
	I0219 04:45:07.183424       1 node_lifecycle_controller.go:1254] Controller detected that zone  is now in state Normal.
	I0219 04:45:07.184104       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0219 04:45:07.184394       1 taint_manager.go:211] "Sending events to api server"
	I0219 04:45:07.184795       1 event.go:294] "Event occurred" object="pause-061400" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-061400 event: Registered Node pause-061400 in Controller"
	I0219 04:45:07.185129       1 shared_informer.go:280] Caches are synced for ephemeral
	I0219 04:45:07.197214       1 shared_informer.go:280] Caches are synced for crt configmap
	I0219 04:45:07.200538       1 shared_informer.go:280] Caches are synced for endpoint
	I0219 04:45:07.205416       1 shared_informer.go:280] Caches are synced for stateful set
	I0219 04:45:07.220819       1 shared_informer.go:280] Caches are synced for disruption
	I0219 04:45:07.222408       1 shared_informer.go:280] Caches are synced for PV protection
	I0219 04:45:07.226391       1 shared_informer.go:280] Caches are synced for namespace
	I0219 04:45:07.229297       1 shared_informer.go:280] Caches are synced for job
	I0219 04:45:07.230776       1 shared_informer.go:280] Caches are synced for endpoint_slice
	I0219 04:45:07.270787       1 shared_informer.go:280] Caches are synced for cronjob
	I0219 04:45:07.307470       1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
	I0219 04:45:07.310218       1 shared_informer.go:280] Caches are synced for resource quota
	I0219 04:45:07.335851       1 shared_informer.go:280] Caches are synced for resource quota
	I0219 04:45:07.771179       1 shared_informer.go:280] Caches are synced for garbage collector
	I0219 04:45:07.801900       1 shared_informer.go:280] Caches are synced for garbage collector
	I0219 04:45:07.805194       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [c80aca6e1a30] <==
	* I0219 04:44:32.700126       1 node.go:163] Successfully retrieved node IP: 172.28.246.210
	I0219 04:44:32.717869       1 server_others.go:109] "Detected node IP" address="172.28.246.210"
	I0219 04:44:32.718149       1 server_others.go:535] "Using iptables proxy"
	I0219 04:44:32.824814       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0219 04:44:32.824848       1 server_others.go:176] "Using iptables Proxier"
	I0219 04:44:32.824893       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0219 04:44:32.826738       1 server.go:655] "Version info" version="v1.26.1"
	I0219 04:44:32.826763       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0219 04:44:32.827894       1 config.go:317] "Starting service config controller"
	I0219 04:44:32.828114       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0219 04:44:32.828160       1 config.go:226] "Starting endpoint slice config controller"
	I0219 04:44:32.828171       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0219 04:44:32.832968       1 config.go:444] "Starting node config controller"
	I0219 04:44:32.833426       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0219 04:44:32.929309       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0219 04:44:32.929397       1 shared_informer.go:280] Caches are synced for service config
	I0219 04:44:32.934166       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-proxy [eea008990fc5] <==
	* I0219 04:44:56.557703       1 node.go:163] Successfully retrieved node IP: 172.28.246.210
	I0219 04:44:56.560173       1 server_others.go:109] "Detected node IP" address="172.28.246.210"
	I0219 04:44:56.560207       1 server_others.go:535] "Using iptables proxy"
	I0219 04:44:56.630773       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0219 04:44:56.630918       1 server_others.go:176] "Using iptables Proxier"
	I0219 04:44:56.630962       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0219 04:44:56.632036       1 server.go:655] "Version info" version="v1.26.1"
	I0219 04:44:56.632179       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0219 04:44:56.634073       1 config.go:317] "Starting service config controller"
	I0219 04:44:56.634754       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0219 04:44:56.635115       1 config.go:226] "Starting endpoint slice config controller"
	I0219 04:44:56.641286       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0219 04:44:56.641325       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0219 04:44:56.635412       1 config.go:444] "Starting node config controller"
	I0219 04:44:56.641349       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0219 04:44:56.641357       1 shared_informer.go:280] Caches are synced for node config
	I0219 04:44:56.736375       1 shared_informer.go:280] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [175333ff2e04] <==
	* I0219 04:44:51.505158       1 serving.go:348] Generated self-signed cert in-memory
	W0219 04:44:54.609312       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0219 04:44:54.609598       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0219 04:44:54.609825       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0219 04:44:54.610081       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0219 04:44:54.689915       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0219 04:44:54.690186       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0219 04:44:54.697476       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0219 04:44:54.697955       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0219 04:44:54.702339       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0219 04:44:54.704747       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0219 04:44:54.799723       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [fa50d192f649] <==
	* I0219 04:44:33.418639       1 serving.go:348] Generated self-signed cert in-memory
	I0219 04:44:34.119951       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0219 04:44:34.120055       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0219 04:44:34.556044       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0219 04:44:34.556141       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0219 04:44:34.556154       1 shared_informer.go:273] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0219 04:44:34.556175       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0219 04:44:34.570896       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0219 04:44:34.570940       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0219 04:44:34.570967       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0219 04:44:34.575146       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0219 04:44:34.660454       1 shared_informer.go:280] Caches are synced for RequestHeaderAuthRequestController
	I0219 04:44:34.671981       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0219 04:44:34.676168       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	E0219 04:44:42.112564       1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0219 04:44:42.112792       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sun 2023-02-19 04:40:04 UTC, ends at Sun 2023-02-19 04:45:40 UTC. --
	Feb 19 04:44:49 pause-061400 kubelet[8081]: I0219 04:44:49.523153    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5cf53c7218b7fd176d54da72155dd87-kubeconfig\") pod \"kube-scheduler-pause-061400\" (UID: \"f5cf53c7218b7fd176d54da72155dd87\") " pod="kube-system/kube-scheduler-pause-061400"
	Feb 19 04:44:49 pause-061400 kubelet[8081]: I0219 04:44:49.523190    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff31bf68418bde452fbbe0538f99857f-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-061400\" (UID: \"ff31bf68418bde452fbbe0538f99857f\") " pod="kube-system/kube-controller-manager-pause-061400"
	Feb 19 04:44:49 pause-061400 kubelet[8081]: I0219 04:44:49.523287    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ff31bf68418bde452fbbe0538f99857f-flexvolume-dir\") pod \"kube-controller-manager-pause-061400\" (UID: \"ff31bf68418bde452fbbe0538f99857f\") " pod="kube-system/kube-controller-manager-pause-061400"
	Feb 19 04:44:49 pause-061400 kubelet[8081]: I0219 04:44:49.523321    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff31bf68418bde452fbbe0538f99857f-ca-certs\") pod \"kube-controller-manager-pause-061400\" (UID: \"ff31bf68418bde452fbbe0538f99857f\") " pod="kube-system/kube-controller-manager-pause-061400"
	Feb 19 04:44:49 pause-061400 kubelet[8081]: I0219 04:44:49.761597    8081 scope.go:115] "RemoveContainer" containerID="80b34d8effdc53dce2993fe2eb94ea4e0f03afd2698b5acb770ce74b0f89fc6b"
	Feb 19 04:44:49 pause-061400 kubelet[8081]: I0219 04:44:49.788749    8081 scope.go:115] "RemoveContainer" containerID="9a8d52471ec08e832ccf7f49afd53644e9754a3e6a868534ba87878254edadf8"
	Feb 19 04:44:49 pause-061400 kubelet[8081]: I0219 04:44:49.801113    8081 scope.go:115] "RemoveContainer" containerID="fa50d192f649422c09a8d323bc4091750b765119cbfe8a9b8606ea1f6351f702"
	Feb 19 04:44:54 pause-061400 kubelet[8081]: I0219 04:44:54.747846    8081 kubelet_node_status.go:108] "Node was previously registered" node="pause-061400"
	Feb 19 04:44:54 pause-061400 kubelet[8081]: I0219 04:44:54.748008    8081 kubelet_node_status.go:73] "Successfully registered node" node="pause-061400"
	Feb 19 04:44:54 pause-061400 kubelet[8081]: I0219 04:44:54.751753    8081 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 19 04:44:54 pause-061400 kubelet[8081]: I0219 04:44:54.753364    8081 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 19 04:44:54 pause-061400 kubelet[8081]: I0219 04:44:54.965098    8081 apiserver.go:52] "Watching apiserver"
	Feb 19 04:44:54 pause-061400 kubelet[8081]: I0219 04:44:54.968064    8081 topology_manager.go:210] "Topology Admit Handler"
	Feb 19 04:44:54 pause-061400 kubelet[8081]: I0219 04:44:54.968331    8081 topology_manager.go:210] "Topology Admit Handler"
	Feb 19 04:44:55 pause-061400 kubelet[8081]: I0219 04:44:55.005196    8081 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 19 04:44:55 pause-061400 kubelet[8081]: I0219 04:44:55.072808    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/305ace80-8a26-4015-a001-8b39b2b2a3ec-config-volume\") pod \"coredns-787d4945fb-mjptj\" (UID: \"305ace80-8a26-4015-a001-8b39b2b2a3ec\") " pod="kube-system/coredns-787d4945fb-mjptj"
	Feb 19 04:44:55 pause-061400 kubelet[8081]: I0219 04:44:55.072894    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86vw6\" (UniqueName: \"kubernetes.io/projected/305ace80-8a26-4015-a001-8b39b2b2a3ec-kube-api-access-86vw6\") pod \"coredns-787d4945fb-mjptj\" (UID: \"305ace80-8a26-4015-a001-8b39b2b2a3ec\") " pod="kube-system/coredns-787d4945fb-mjptj"
	Feb 19 04:44:55 pause-061400 kubelet[8081]: I0219 04:44:55.072931    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/df76445b-2fa1-405c-9cd6-46a18b28ef95-kube-proxy\") pod \"kube-proxy-mgb72\" (UID: \"df76445b-2fa1-405c-9cd6-46a18b28ef95\") " pod="kube-system/kube-proxy-mgb72"
	Feb 19 04:44:55 pause-061400 kubelet[8081]: I0219 04:44:55.072957    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df76445b-2fa1-405c-9cd6-46a18b28ef95-xtables-lock\") pod \"kube-proxy-mgb72\" (UID: \"df76445b-2fa1-405c-9cd6-46a18b28ef95\") " pod="kube-system/kube-proxy-mgb72"
	Feb 19 04:44:55 pause-061400 kubelet[8081]: I0219 04:44:55.072983    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df76445b-2fa1-405c-9cd6-46a18b28ef95-lib-modules\") pod \"kube-proxy-mgb72\" (UID: \"df76445b-2fa1-405c-9cd6-46a18b28ef95\") " pod="kube-system/kube-proxy-mgb72"
	Feb 19 04:44:55 pause-061400 kubelet[8081]: I0219 04:44:55.073043    8081 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jslrv\" (UniqueName: \"kubernetes.io/projected/df76445b-2fa1-405c-9cd6-46a18b28ef95-kube-api-access-jslrv\") pod \"kube-proxy-mgb72\" (UID: \"df76445b-2fa1-405c-9cd6-46a18b28ef95\") " pod="kube-system/kube-proxy-mgb72"
	Feb 19 04:44:55 pause-061400 kubelet[8081]: I0219 04:44:55.073060    8081 reconciler.go:41] "Reconciler: start to sync state"
	Feb 19 04:44:56 pause-061400 kubelet[8081]: I0219 04:44:56.169421    8081 scope.go:115] "RemoveContainer" containerID="c80aca6e1a30ae7bb7e28f355b8c0ce0351a8d9ce6ed85d3fd3a1c1648dbd60d"
	Feb 19 04:44:56 pause-061400 kubelet[8081]: I0219 04:44:56.262564    8081 request.go:690] Waited for 1.087518469s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/coredns/token
	Feb 19 04:44:57 pause-061400 kubelet[8081]: I0219 04:44:57.542370    8081 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac745449796840c61516af32333a6452ed4f81ef04f4bd42071ef043d237531"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-061400 -n pause-061400
E0219 04:45:41.834389   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-061400 -n pause-061400: (5.7029015s)
helpers_test.go:261: (dbg) Run:  kubectl --context pause-061400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (234.97s)

Test pass (257/292)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 11.75
4 TestDownloadOnly/v1.16.0/preload-exists 0.06
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.26.1/json-events 8.42
11 TestDownloadOnly/v1.26.1/preload-exists 0
14 TestDownloadOnly/v1.26.1/kubectl 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.6
16 TestDownloadOnly/DeleteAll 1.54
17 TestDownloadOnly/DeleteAlwaysSucceeds 1.45
19 TestBinaryMirror 3.3
20 TestOffline 267.23
22 TestAddons/Setup 284.58
24 TestAddons/parallel/Registry 23.15
25 TestAddons/parallel/Ingress 38.06
26 TestAddons/parallel/MetricsServer 9.4
27 TestAddons/parallel/HelmTiller 19.96
29 TestAddons/parallel/CSI 61.88
30 TestAddons/parallel/Headlamp 19.56
31 TestAddons/parallel/CloudSpanner 8.64
34 TestAddons/serial/GCPAuth/Namespaces 0.48
35 TestAddons/StoppedEnableDisable 27.78
36 TestCertOptions 270.65
37 TestCertExpiration 623.23
38 TestDockerFlags 228.44
39 TestForceSystemdFlag 175.42
40 TestForceSystemdEnv 200.17
45 TestErrorSpam/setup 115.92
46 TestErrorSpam/start 5.91
47 TestErrorSpam/status 14.53
48 TestErrorSpam/pause 9.65
49 TestErrorSpam/unpause 9.72
50 TestErrorSpam/stop 36.85
53 TestFunctional/serial/CopySyncFile 0.03
54 TestFunctional/serial/StartWithProxy 132.26
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 74.76
57 TestFunctional/serial/KubeContext 0.17
58 TestFunctional/serial/KubectlGetPods 0.31
61 TestFunctional/serial/CacheCmd/cache/add_remote 12.79
62 TestFunctional/serial/CacheCmd/cache/add_local 6.06
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.26
64 TestFunctional/serial/CacheCmd/cache/list 0.28
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 3.56
66 TestFunctional/serial/CacheCmd/cache/cache_reload 14.19
67 TestFunctional/serial/CacheCmd/cache/delete 0.52
68 TestFunctional/serial/MinikubeKubectlCmd 0.52
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.57
70 TestFunctional/serial/ExtraConfig 77.09
71 TestFunctional/serial/ComponentHealth 0.24
72 TestFunctional/serial/LogsCmd 4.34
73 TestFunctional/serial/LogsFileCmd 5.03
75 TestFunctional/parallel/ConfigCmd 1.85
77 TestFunctional/parallel/DryRun 5.83
78 TestFunctional/parallel/InternationalLanguage 2.08
79 TestFunctional/parallel/StatusCmd 15.04
82 TestFunctional/parallel/ServiceCmd 42.2
83 TestFunctional/parallel/ServiceCmdConnect 17.9
84 TestFunctional/parallel/AddonsCmd 0.85
85 TestFunctional/parallel/PersistentVolumeClaim 36.72
87 TestFunctional/parallel/SSHCmd 7.09
88 TestFunctional/parallel/CpCmd 15.39
89 TestFunctional/parallel/MySQL 48.69
90 TestFunctional/parallel/FileSync 3.8
91 TestFunctional/parallel/CertSync 24.04
95 TestFunctional/parallel/NodeLabels 0.31
97 TestFunctional/parallel/NonActiveRuntimeDisabled 4.05
99 TestFunctional/parallel/License 2.59
100 TestFunctional/parallel/ProfileCmd/profile_not_create 4.34
101 TestFunctional/parallel/ProfileCmd/profile_list 3.77
102 TestFunctional/parallel/ProfileCmd/profile_json_output 3.8
103 TestFunctional/parallel/Version/short 0.36
104 TestFunctional/parallel/Version/components 4.17
105 TestFunctional/parallel/ImageCommands/ImageListShort 3.08
106 TestFunctional/parallel/ImageCommands/ImageListTable 3.03
107 TestFunctional/parallel/ImageCommands/ImageListJson 3.24
108 TestFunctional/parallel/ImageCommands/ImageListYaml 3.13
109 TestFunctional/parallel/ImageCommands/ImageBuild 13.91
110 TestFunctional/parallel/ImageCommands/Setup 3.23
112 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.69
115 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 12.27
116 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 10.41
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/DockerEnv/powershell 17.63
124 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 15.21
125 TestFunctional/parallel/UpdateContextCmd/no_changes 1.34
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 1.05
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 1.06
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 5.71
129 TestFunctional/parallel/ImageCommands/ImageRemove 6.93
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 8.83
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 7.46
132 TestFunctional/delete_addon-resizer_images 0.65
133 TestFunctional/delete_my-image_image 0.22
134 TestFunctional/delete_minikube_cached_images 0.23
138 TestImageBuild/serial/NormalBuild 5.15
139 TestImageBuild/serial/BuildWithBuildArg 6.41
140 TestImageBuild/serial/BuildWithDockerIgnore 3.56
141 TestImageBuild/serial/BuildWithSpecifiedDockerfile 3.91
144 TestIngressAddonLegacy/StartLegacyK8sCluster 137.15
146 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 25.27
147 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 3.22
148 TestIngressAddonLegacy/serial/ValidateIngressAddons 49.46
151 TestJSONOutput/start/Command 131.67
152 TestJSONOutput/start/Audit 0
154 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/pause/Command 3.59
158 TestJSONOutput/pause/Audit 0
160 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/unpause/Command 3.47
164 TestJSONOutput/unpause/Audit 0
166 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/stop/Command 24.37
170 TestJSONOutput/stop/Audit 0
172 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
174 TestErrorJSONOutput 1.49
179 TestMainNoArgs 0.3
180 TestMinikubeProfile 320.81
183 TestMountStart/serial/StartWithMountFirst 75.88
184 TestMountStart/serial/VerifyMountFirst 3.67
185 TestMountStart/serial/StartWithMountSecond 76.47
186 TestMountStart/serial/VerifyMountSecond 3.61
187 TestMountStart/serial/DeleteFirst 12.41
188 TestMountStart/serial/VerifyMountPostDelete 3.5
189 TestMountStart/serial/Stop 10.81
190 TestMountStart/serial/RestartStopped 63.35
191 TestMountStart/serial/VerifyMountPostStop 3.76
194 TestMultiNode/serial/FreshStart2Nodes 261.92
195 TestMultiNode/serial/DeployApp2Nodes 10.53
197 TestMultiNode/serial/AddNode 128.52
198 TestMultiNode/serial/ProfileList 3.09
199 TestMultiNode/serial/CopyFile 137.11
200 TestMultiNode/serial/StopNode 31.11
201 TestMultiNode/serial/StartAfterStop 91.65
203 TestMultiNode/serial/DeleteNode 36.35
204 TestMultiNode/serial/StopMultiNode 46.91
205 TestMultiNode/serial/RestartMultiNode 191.16
206 TestMultiNode/serial/ValidateNameConflict 150.98
210 TestPreload 317.74
211 TestScheduledStopWindows 220.45
218 TestKubernetesUpgrade 803.49
221 TestNoKubernetes/serial/StartNoK8sWithVersion 0.38
234 TestStoppedBinaryUpgrade/Setup 0.78
244 TestPause/serial/Start 154.63
246 TestStoppedBinaryUpgrade/MinikubeLogs 5.59
247 TestNetworkPlugins/group/auto/Start 208.92
248 TestNetworkPlugins/group/kindnet/Start 240.85
249 TestNetworkPlugins/group/calico/Start 291.38
250 TestNetworkPlugins/group/auto/KubeletFlags 4.08
251 TestNetworkPlugins/group/auto/NetCatPod 15.84
252 TestNetworkPlugins/group/auto/DNS 0.47
253 TestNetworkPlugins/group/auto/Localhost 0.5
254 TestNetworkPlugins/group/auto/HairPin 0.42
255 TestNetworkPlugins/group/custom-flannel/Start 225.75
256 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
257 TestNetworkPlugins/group/kindnet/KubeletFlags 4.19
258 TestNetworkPlugins/group/kindnet/NetCatPod 23.24
259 TestNetworkPlugins/group/kindnet/DNS 0.42
260 TestNetworkPlugins/group/kindnet/Localhost 0.42
261 TestNetworkPlugins/group/kindnet/HairPin 0.4
262 TestNetworkPlugins/group/false/Start 165.99
263 TestNetworkPlugins/group/calico/ControllerPod 5.04
264 TestNetworkPlugins/group/calico/KubeletFlags 4.92
265 TestNetworkPlugins/group/calico/NetCatPod 17.86
266 TestNetworkPlugins/group/custom-flannel/KubeletFlags 4.25
267 TestNetworkPlugins/group/calico/DNS 0.48
268 TestNetworkPlugins/group/calico/Localhost 0.46
269 TestNetworkPlugins/group/calico/HairPin 0.41
270 TestNetworkPlugins/group/custom-flannel/NetCatPod 15.73
271 TestNetworkPlugins/group/custom-flannel/DNS 0.45
272 TestNetworkPlugins/group/custom-flannel/Localhost 0.38
273 TestNetworkPlugins/group/custom-flannel/HairPin 0.41
274 TestNetworkPlugins/group/enable-default-cni/Start 191.36
275 TestNetworkPlugins/group/false/KubeletFlags 4.36
276 TestNetworkPlugins/group/false/NetCatPod 17.99
277 TestNetworkPlugins/group/false/DNS 0.44
278 TestNetworkPlugins/group/false/Localhost 0.4
279 TestNetworkPlugins/group/false/HairPin 0.39
280 TestNetworkPlugins/group/flannel/Start 165.54
281 TestNetworkPlugins/group/bridge/Start 218.98
282 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 4.02
283 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.74
284 TestNetworkPlugins/group/enable-default-cni/DNS 0.42
285 TestNetworkPlugins/group/enable-default-cni/Localhost 0.38
286 TestNetworkPlugins/group/enable-default-cni/HairPin 0.39
287 TestNetworkPlugins/group/kubenet/Start 211.37
288 TestNetworkPlugins/group/flannel/ControllerPod 5.22
289 TestNetworkPlugins/group/flannel/KubeletFlags 6.19
290 TestNetworkPlugins/group/flannel/NetCatPod 20.42
291 TestNetworkPlugins/group/flannel/DNS 0.51
292 TestNetworkPlugins/group/flannel/Localhost 0.49
293 TestNetworkPlugins/group/flannel/HairPin 0.47
294 TestNetworkPlugins/group/bridge/KubeletFlags 4.01
295 TestNetworkPlugins/group/bridge/NetCatPod 41.63
297 TestStartStop/group/old-k8s-version/serial/FirstStart 217.65
298 TestNetworkPlugins/group/bridge/DNS 0.39
299 TestNetworkPlugins/group/bridge/Localhost 0.39
300 TestNetworkPlugins/group/bridge/HairPin 0.39
301 TestNetworkPlugins/group/kubenet/KubeletFlags 4.16
302 TestNetworkPlugins/group/kubenet/NetCatPod 16.62
303 TestNetworkPlugins/group/kubenet/DNS 0.44
304 TestNetworkPlugins/group/kubenet/Localhost 0.4
305 TestNetworkPlugins/group/kubenet/HairPin 0.42
307 TestStartStop/group/no-preload/serial/FirstStart 192.1
309 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 173.93
310 TestStartStop/group/old-k8s-version/serial/DeployApp 10.98
311 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 5.46
312 TestStartStop/group/old-k8s-version/serial/Stop 33.56
313 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 4
314 TestStartStop/group/old-k8s-version/serial/SecondStart 474.58
316 TestStartStop/group/newest-cni/serial/FirstStart 217.23
317 TestStartStop/group/no-preload/serial/DeployApp 10.86
318 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 4.33
319 TestStartStop/group/no-preload/serial/Stop 26.77
320 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 3.18
321 TestStartStop/group/no-preload/serial/SecondStart 458.5
322 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 19.34
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 4.47
324 TestStartStop/group/default-k8s-diff-port/serial/Stop 25.31
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 2.8
326 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 688.17
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 4.23
329 TestStartStop/group/newest-cni/serial/Stop 26.44
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 2.44
331 TestStartStop/group/newest-cni/serial/SecondStart 106.43
332 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 4.19
335 TestStartStop/group/newest-cni/serial/Pause 29.85
337 TestStartStop/group/embed-certs/serial/FirstStart 143.31
338 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.04
339 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.42
340 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 3.98
341 TestStartStop/group/old-k8s-version/serial/Pause 27.74
342 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.03
343 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.42
344 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 3.79
345 TestStartStop/group/no-preload/serial/Pause 27.38
346 TestStartStop/group/embed-certs/serial/DeployApp 9.83
347 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 4.27
348 TestStartStop/group/embed-certs/serial/Stop 25.32
349 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 2.99
350 TestStartStop/group/embed-certs/serial/SecondStart 391.64
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 21.04
352 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.4
353 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 3.75
354 TestStartStop/group/default-k8s-diff-port/serial/Pause 27.32
355 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.04
356 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.38
357 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 3.7
358 TestStartStop/group/embed-certs/serial/Pause 25.97
TestDownloadOnly/v1.16.0/json-events (11.75s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-051600 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-051600 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv: (11.7450172s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (11.75s)

TestDownloadOnly/v1.16.0/preload-exists (0.06s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.06s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-051600
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-051600: exit status 85 (300.5879ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-051600 | minikube1\jenkins | v1.29.0 | 19 Feb 23 03:15 GMT |          |
	|         | -p download-only-051600        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/19 03:15:28
	Running on machine: minikube1
	Binary: Built with gc go1.20 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0219 03:15:28.607862    9240 out.go:296] Setting OutFile to fd 616 ...
	I0219 03:15:28.665839    9240 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 03:15:28.665839    9240 out.go:309] Setting ErrFile to fd 620...
	I0219 03:15:28.665839    9240 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0219 03:15:28.675547    9240 root.go:312] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0219 03:15:28.686799    9240 out.go:303] Setting JSON to true
	I0219 03:15:28.689546    9240 start.go:125] hostinfo: {"hostname":"minikube1","uptime":13518,"bootTime":1676763010,"procs":148,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2604 Build 19045.2604","kernelVersion":"10.0.19045.2604 Build 19045.2604","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0219 03:15:28.689546    9240 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0219 03:15:28.717313    9240 out.go:97] [download-only-051600] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	W0219 03:15:28.717778    9240 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0219 03:15:28.718025    9240 notify.go:220] Checking for updates...
	I0219 03:15:28.720659    9240 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 03:15:28.723075    9240 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0219 03:15:28.726703    9240 out.go:169] MINIKUBE_LOCATION=master
	I0219 03:15:28.729569    9240 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0219 03:15:28.734673    9240 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0219 03:15:28.735665    9240 driver.go:365] Setting default libvirt URI to qemu:///system
	I0219 03:15:30.607293    9240 out.go:97] Using the hyperv driver based on user configuration
	I0219 03:15:30.608264    9240 start.go:296] selected driver: hyperv
	I0219 03:15:30.608264    9240 start.go:857] validating driver "hyperv" against <nil>
	I0219 03:15:30.608264    9240 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0219 03:15:30.655813    9240 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0219 03:15:30.656400    9240 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0219 03:15:30.656943    9240 cni.go:84] Creating CNI manager for ""
	I0219 03:15:30.656943    9240 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0219 03:15:30.657025    9240 start_flags.go:319] config:
	{Name:download-only-051600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-051600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 03:15:30.657647    9240 iso.go:125] acquiring lock: {Name:mk0a282de77c20a01e287b73437e6c43df35e4e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 03:15:30.661753    9240 out.go:97] Downloading VM boot image ...
	I0219 03:15:30.661830    9240 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.29.0-1676568791-15849-amd64.iso
	I0219 03:15:33.434396    9240 out.go:97] Starting control plane node download-only-051600 in cluster download-only-051600
	I0219 03:15:33.434396    9240 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0219 03:15:33.478467    9240 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0219 03:15:33.479052    9240 cache.go:57] Caching tarball of preloaded images
	I0219 03:15:33.479668    9240 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0219 03:15:33.483108    9240 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0219 03:15:33.483108    9240 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0219 03:15:33.556939    9240 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0219 03:15:37.671628    9240 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0219 03:15:37.673233    9240 preload.go:256] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-051600"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)

TestDownloadOnly/v1.26.1/json-events (8.42s)

=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-051600 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-051600 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=hyperv: (8.4182938s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (8.42s)

TestDownloadOnly/v1.26.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

TestDownloadOnly/v1.26.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.1/kubectl
--- PASS: TestDownloadOnly/v1.26.1/kubectl (0.00s)

TestDownloadOnly/v1.26.1/LogsDuration (0.6s)

=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-051600
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-051600: exit status 85 (597.7672ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-051600 | minikube1\jenkins | v1.29.0 | 19 Feb 23 03:15 GMT |          |
	|         | -p download-only-051600        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	| start   | -o=json --download-only        | download-only-051600 | minikube1\jenkins | v1.29.0 | 19 Feb 23 03:15 GMT |          |
	|         | -p download-only-051600        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/19 03:15:40
	Running on machine: minikube1
	Binary: Built with gc go1.20 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0219 03:15:40.708742   10020 out.go:296] Setting OutFile to fd 668 ...
	I0219 03:15:40.762252   10020 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 03:15:40.762252   10020 out.go:309] Setting ErrFile to fd 688...
	I0219 03:15:40.762252   10020 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0219 03:15:40.774796   10020 root.go:312] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0219 03:15:40.783808   10020 out.go:303] Setting JSON to true
	I0219 03:15:40.786661   10020 start.go:125] hostinfo: {"hostname":"minikube1","uptime":13530,"bootTime":1676763010,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2604 Build 19045.2604","kernelVersion":"10.0.19045.2604 Build 19045.2604","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0219 03:15:40.787298   10020 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0219 03:15:40.791631   10020 out.go:97] [download-only-051600] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	I0219 03:15:40.791975   10020 notify.go:220] Checking for updates...
	I0219 03:15:40.794370   10020 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 03:15:40.797397   10020 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0219 03:15:40.804240   10020 out.go:169] MINIKUBE_LOCATION=master
	I0219 03:15:40.806861   10020 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0219 03:15:40.811687   10020 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0219 03:15:40.814656   10020 config.go:182] Loaded profile config "download-only-051600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0219 03:15:40.815267   10020 start.go:765] api.Load failed for download-only-051600: filestore "download-only-051600": Docker machine "download-only-051600" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0219 03:15:40.815267   10020 driver.go:365] Setting default libvirt URI to qemu:///system
	W0219 03:15:40.815267   10020 start.go:765] api.Load failed for download-only-051600: filestore "download-only-051600": Docker machine "download-only-051600" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0219 03:15:42.869630   10020 out.go:97] Using the hyperv driver based on existing profile
	I0219 03:15:42.870351   10020 start.go:296] selected driver: hyperv
	I0219 03:15:42.870351   10020 start.go:857] validating driver "hyperv" against &{Name:download-only-051600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-051600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 03:15:42.927155   10020 cni.go:84] Creating CNI manager for ""
	I0219 03:15:42.927155   10020 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0219 03:15:42.927155   10020 start_flags.go:319] config:
	{Name:download-only-051600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:download-only-051600 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 03:15:42.927425   10020 iso.go:125] acquiring lock: {Name:mk0a282de77c20a01e287b73437e6c43df35e4e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0219 03:15:42.930629   10020 out.go:97] Starting control plane node download-only-051600 in cluster download-only-051600
	I0219 03:15:42.930629   10020 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 03:15:42.965579   10020 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0219 03:15:42.965579   10020 cache.go:57] Caching tarball of preloaded images
	I0219 03:15:42.965579   10020 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0219 03:15:42.968752   10020 out.go:97] Downloading Kubernetes v1.26.1 preload ...
	I0219 03:15:42.968752   10020 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0219 03:15:43.034208   10020 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4?checksum=md5:c6cc8ea1da4e19500d6fe35540785ea8 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-051600"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.60s)

TestDownloadOnly/DeleteAll (1.54s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.5448365s)
--- PASS: TestDownloadOnly/DeleteAll (1.54s)

TestDownloadOnly/DeleteAlwaysSucceeds (1.45s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-051600
aaa_download_only_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-051600: (1.4502658s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (1.45s)

TestBinaryMirror (3.3s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-980700 --alsologtostderr --binary-mirror http://127.0.0.1:59983 --driver=hyperv
aaa_download_only_test.go:308: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-980700 --alsologtostderr --binary-mirror http://127.0.0.1:59983 --driver=hyperv: (2.4103729s)
helpers_test.go:175: Cleaning up "binary-mirror-980700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-980700
--- PASS: TestBinaryMirror (3.30s)

TestOffline (267.23s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-928900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-928900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m43.0400033s)
helpers_test.go:175: Cleaning up "offline-docker-928900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-928900
E0219 04:37:14.724778   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-928900: (44.1886529s)
--- PASS: TestOffline (267.23s)

TestAddons/Setup (284.58s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-153200 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-153200 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (4m44.5826633s)
--- PASS: TestAddons/Setup (284.58s)

TestAddons/parallel/Registry (23.15s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 28.17ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-v45rm" [13456f48-c0b0-4307-bce1-0a675570bbd2] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0323836s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5lfmx" [df0a5093-a237-4c3a-905b-984b25d22697] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0399483s
addons_test.go:305: (dbg) Run:  kubectl --context addons-153200 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-153200 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-153200 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.0710959s)
addons_test.go:324: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-153200 ip
addons_test.go:324: (dbg) Done: out/minikube-windows-amd64.exe -p addons-153200 ip: (1.0395038s)
2023/02/19 03:21:01 [DEBUG] GET http://172.28.250.125:5000
addons_test.go:353: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-153200 addons disable registry --alsologtostderr -v=1
addons_test.go:353: (dbg) Done: out/minikube-windows-amd64.exe -p addons-153200 addons disable registry --alsologtostderr -v=1: (3.6424112s)
--- PASS: TestAddons/parallel/Registry (23.15s)

TestAddons/parallel/Ingress (38.06s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-153200 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-153200 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:197: (dbg) Done: kubectl --context addons-153200 replace --force -f testdata\nginx-ingress-v1.yaml: (1.6900494s)
addons_test.go:210: (dbg) Run:  kubectl --context addons-153200 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a9b620b2-656c-4b37-93e1-29f5e13f4d18] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a9b620b2-656c-4b37-93e1-29f5e13f4d18] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.0283329s
addons_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-153200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe -p addons-153200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (3.7472911s)
addons_test.go:251: (dbg) Run:  kubectl --context addons-153200 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-153200 ip
addons_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p addons-153200 ip: (1.0282693s)
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 172.28.250.125
addons_test.go:271: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-153200 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-windows-amd64.exe -p addons-153200 addons disable ingress-dns --alsologtostderr -v=1: (4.0777104s)
addons_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-153200 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-windows-amd64.exe -p addons-153200 addons disable ingress --alsologtostderr -v=1: (10.7532134s)
--- PASS: TestAddons/parallel/Ingress (38.06s)

TestAddons/parallel/MetricsServer (9.4s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 28.3353ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-2b6sz" [cc0b0552-a345-42e8-9441-bf5d20bdbe3e] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0323836s
addons_test.go:380: (dbg) Run:  kubectl --context addons-153200 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-153200 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:397: (dbg) Done: out/minikube-windows-amd64.exe -p addons-153200 addons disable metrics-server --alsologtostderr -v=1: (4.0740941s)
--- PASS: TestAddons/parallel/MetricsServer (9.40s)

TestAddons/parallel/HelmTiller (19.96s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 4.6212ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-xk7rk" [57c5da5d-2ac4-41cc-b394-7457ff27499b] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.2128501s
addons_test.go:438: (dbg) Run:  kubectl --context addons-153200 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-153200 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (11.1685532s)
addons_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-153200 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:455: (dbg) Done: out/minikube-windows-amd64.exe -p addons-153200 addons disable helm-tiller --alsologtostderr -v=1: (3.5597717s)
--- PASS: TestAddons/parallel/HelmTiller (19.96s)

TestAddons/parallel/CSI (61.88s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 10.2961ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-153200 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:529: (dbg) Done: kubectl --context addons-153200 create -f testdata\csi-hostpath-driver\pvc.yaml: (1.0430786s)
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153200 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-153200 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [975d6ce3-2699-4990-a6bd-af23ad968131] Pending
helpers_test.go:344: "task-pv-pod" [975d6ce3-2699-4990-a6bd-af23ad968131] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [975d6ce3-2699-4990-a6bd-af23ad968131] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.0356703s
addons_test.go:549: (dbg) Run:  kubectl --context addons-153200 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-153200 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-153200 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-153200 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-153200 delete pod task-pv-pod: (1.9929494s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-153200 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-153200 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-153200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-153200 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [eec8b2ff-2c0a-4ea5-90f9-bb87692c2052] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [eec8b2ff-2c0a-4ea5-90f9-bb87692c2052] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0279289s
addons_test.go:591: (dbg) Run:  kubectl --context addons-153200 delete pod task-pv-pod-restore
addons_test.go:591: (dbg) Done: kubectl --context addons-153200 delete pod task-pv-pod-restore: (1.7082497s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-153200 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-153200 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-153200 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-windows-amd64.exe -p addons-153200 addons disable csi-hostpath-driver --alsologtostderr -v=1: (9.6196017s)
addons_test.go:607: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-153200 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:607: (dbg) Done: out/minikube-windows-amd64.exe -p addons-153200 addons disable volumesnapshots --alsologtostderr -v=1: (3.6182843s)
--- PASS: TestAddons/parallel/CSI (61.88s)
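The seven back-to-back `get pvc hpvc-restore` invocations above are the test's readiness poll: it re-reads `.status.phase` until the claim reaches the phase it is waiting for or the 6m0s budget runs out. A minimal sketch of that poll-with-deadline loop, where `phase_of` is a hypothetical stand-in for the real `kubectl get pvc -o jsonpath={.status.phase}` query (this is not minikube's actual helper code):

```python
import time

def wait_for_phase(phase_of, want="Bound", timeout=360.0, interval=2.0):
    """Poll phase_of() until it returns `want` or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if phase_of() == want:
            return True
        time.sleep(interval)
    return False

# Stand-in for the kubectl jsonpath query: Pending twice, then Bound.
phases = iter(["Pending", "Pending", "Bound"])
print(wait_for_phase(lambda: next(phases), interval=0.01))  # → True
```

The real test additionally distinguishes terminal failure states from "still waiting"; this sketch only shows the happy-path deadline loop.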

TestAddons/parallel/Headlamp (19.56s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-153200 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-153200 --alsologtostderr -v=1: (4.5258824s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-ghwln" [1166b1a6-8550-4d0d-9d7c-9d69c12575dd] Pending
helpers_test.go:344: "headlamp-5759877c79-ghwln" [1166b1a6-8550-4d0d-9d7c-9d69c12575dd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-ghwln" [1166b1a6-8550-4d0d-9d7c-9d69c12575dd] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.0317893s
--- PASS: TestAddons/parallel/Headlamp (19.56s)

TestAddons/parallel/CloudSpanner (8.64s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-ddf7c59b4-9r2hj" [c46e20f3-69bd-4ad8-a1d3-7ed7496119e2] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0281051s
addons_test.go:813: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-153200
addons_test.go:813: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-153200: (3.5705895s)
--- PASS: TestAddons/parallel/CloudSpanner (8.64s)

TestAddons/serial/GCPAuth/Namespaces (0.48s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-153200 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-153200 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.48s)

TestAddons/StoppedEnableDisable (27.78s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-153200
addons_test.go:147: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-153200: (24.7391176s)
addons_test.go:151: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-153200
addons_test.go:151: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-153200: (1.73758s)
addons_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-153200
addons_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-153200: (1.2980773s)
--- PASS: TestAddons/StoppedEnableDisable (27.78s)

TestCertOptions (270.65s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-187700 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
E0219 04:47:14.729214   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-187700 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (3m33.4922288s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-187700 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-187700 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (3.8267035s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-187700 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-187700 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-187700 -- "sudo cat /etc/kubernetes/admin.conf": (3.9673802s)
helpers_test.go:175: Cleaning up "cert-options-187700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-187700
E0219 04:50:41.832882   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-187700: (49.1704463s)
--- PASS: TestCertOptions (270.65s)

TestCertExpiration (623.23s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-011800 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-011800 --memory=2048 --cert-expiration=3m --driver=hyperv: (3m31.1106916s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-011800 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-011800 --memory=2048 --cert-expiration=8760h --driver=hyperv: (3m0.5331425s)
helpers_test.go:175: Cleaning up "cert-expiration-011800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-011800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-011800: (51.5677584s)
--- PASS: TestCertExpiration (623.23s)

TestDockerFlags (228.44s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-045000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:45: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-045000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (2m50.5251388s)
docker_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-045000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-045000 ssh "sudo systemctl show docker --property=Environment --no-pager": (3.8079249s)
docker_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-045000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-045000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (3.7696112s)
helpers_test.go:175: Cleaning up "docker-flags-045000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-045000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-045000: (50.3358108s)
--- PASS: TestDockerFlags (228.44s)
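TestDockerFlags passes the `--docker-env`/`--docker-opt` values through to the Docker unit and then reads them back with `systemctl show docker --property=Environment --no-pager`. Parsing that one-line output is simple enough to sketch; the sample string below is illustrative (not captured from this run), and the sketch only handles the simple unquoted case, whereas systemd may quote values containing spaces:

```python
def parse_environment(line: str) -> dict:
    """Parse a `systemctl show --property=Environment` line into a dict."""
    _, _, rest = line.partition("Environment=")
    return dict(item.split("=", 1) for item in rest.split())

sample = "Environment=FOO=BAR BAZ=BAT"  # illustrative, not from this run
print(parse_environment(sample))  # → {'FOO': 'BAR', 'BAZ': 'BAT'}
```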

TestForceSystemdFlag (175.42s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-928900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:85: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-928900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (2m1.6121729s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-928900 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-928900 ssh "docker info --format {{.CgroupDriver}}": (3.7124287s)
helpers_test.go:175: Cleaning up "force-systemd-flag-928900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-928900
E0219 04:35:41.833644   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-928900: (50.0912911s)
--- PASS: TestForceSystemdFlag (175.42s)

TestForceSystemdEnv (200.17s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-780200 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-780200 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (2m38.1861818s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-780200 ssh "docker info --format {{.CgroupDriver}}"
E0219 04:49:05.290108   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-780200 ssh "docker info --format {{.CgroupDriver}}": (3.6982037s)
helpers_test.go:175: Cleaning up "force-systemd-env-780200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-780200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-780200: (38.2800395s)
--- PASS: TestForceSystemdEnv (200.17s)

TestErrorSpam/setup (115.92s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-534200 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-534200 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 --driver=hyperv: (1m55.9176815s)
error_spam_test.go:91: acceptable stderr: "! C:\\ProgramData\\chocolatey\\bin\\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.1."
--- PASS: TestErrorSpam/setup (115.92s)
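The "acceptable stderr" line above is minikube's version-skew warning: the local kubectl (1.18.2) is far behind the cluster's Kubernetes (1.26.1), which kubectl's support policy (roughly +/-1 minor version of the server) does not cover. A rough sketch of the kind of minor-version comparison behind such a warning, using a hypothetical helper rather than minikube's actual code:

```python
def minor_skew(client: str, server: str) -> int:
    """Return the minor-version distance between two 'major.minor.patch' strings."""
    c_major, c_minor = (int(p) for p in client.split(".")[:2])
    s_major, s_minor = (int(p) for p in server.split(".")[:2])
    if c_major != s_major:
        return abs(c_major - s_major) * 100  # any major skew dwarfs minor skew
    return abs(c_minor - s_minor)

print(minor_skew("1.18.2", "1.26.1"))  # → 8, well outside the supported skew
```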

TestErrorSpam/start (5.91s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 start --dry-run: (2.0130515s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 start --dry-run: (1.9654753s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 start --dry-run: (1.9270567s)
--- PASS: TestErrorSpam/start (5.91s)

TestErrorSpam/status (14.53s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 status: (4.9334363s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 status: (4.7768788s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 status: (4.8159183s)
--- PASS: TestErrorSpam/status (14.53s)

TestErrorSpam/pause (9.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 pause
E0219 03:25:41.842684   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 03:25:41.857277   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 03:25:41.873346   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 03:25:41.904841   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 03:25:41.952516   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 03:25:42.047093   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 03:25:42.219741   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 03:25:42.554412   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 03:25:43.206704   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 pause: (3.3871199s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 pause
E0219 03:25:44.488994   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 pause: (3.1324463s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 pause
E0219 03:25:47.049447   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 pause: (3.1270633s)
--- PASS: TestErrorSpam/pause (9.65s)

TestErrorSpam/unpause (9.72s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 unpause
E0219 03:25:52.183788   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 unpause: (3.3873485s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 unpause: (3.1773628s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 unpause: (3.1564437s)
--- PASS: TestErrorSpam/unpause (9.72s)

TestErrorSpam/stop (36.85s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 stop
E0219 03:26:02.434593   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 03:26:22.922022   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 stop: (23.6027016s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 stop: (7.238617s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-534200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-534200 stop: (6.00494s)
--- PASS: TestErrorSpam/stop (36.85s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\10148\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (132.26s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-068200 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0219 03:27:03.886367   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 03:28:25.811683   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
functional_test.go:2161: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-068200 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (2m12.254822s)
--- PASS: TestFunctional/serial/StartWithProxy (132.26s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (74.76s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-068200 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-068200 --alsologtostderr -v=8: (1m14.7585937s)
functional_test.go:656: soft start took 1m14.7607427s for "functional-068200" cluster.
--- PASS: TestFunctional/serial/SoftStart (74.76s)

TestFunctional/serial/KubeContext (0.17s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.17s)

TestFunctional/serial/KubectlGetPods (0.31s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-068200 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.31s)

TestFunctional/serial/CacheCmd/cache/add_remote (12.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 cache add k8s.gcr.io/pause:3.1: (4.228238s)
functional_test.go:1042: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 cache add k8s.gcr.io/pause:3.3: (4.2371885s)
functional_test.go:1042: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 cache add k8s.gcr.io/pause:latest: (4.3216528s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (12.79s)

TestFunctional/serial/CacheCmd/cache/add_local (6.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-068200 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1115744015\001
functional_test.go:1070: (dbg) Done: docker build -t minikube-local-cache-test:functional-068200 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1115744015\001: (1.6936166s)
functional_test.go:1082: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 cache add minikube-local-cache-test:functional-068200
functional_test.go:1082: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 cache add minikube-local-cache-test:functional-068200: (3.8225212s)
functional_test.go:1087: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 cache delete minikube-local-cache-test:functional-068200
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-068200
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (6.06s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.26s)

TestFunctional/serial/CacheCmd/cache/list (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.28s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (3.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 ssh sudo crictl images
functional_test.go:1117: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 ssh sudo crictl images: (3.5549877s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (3.56s)

TestFunctional/serial/CacheCmd/cache/cache_reload (14.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1140: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 ssh sudo docker rmi k8s.gcr.io/pause:latest: (3.4965458s)
functional_test.go:1146: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-068200 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (3.4689902s)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 cache reload
E0219 03:30:41.854529   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
functional_test.go:1151: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 cache reload: (3.7723184s)
functional_test.go:1156: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1156: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: (3.4506136s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (14.19s)

TestFunctional/serial/CacheCmd/cache/delete (0.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.52s)

TestFunctional/serial/MinikubeKubectlCmd (0.52s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 kubectl -- --context functional-068200 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.52s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.57s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out\kubectl.exe --context functional-068200 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.57s)

TestFunctional/serial/ExtraConfig (77.09s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-068200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0219 03:31:09.666452   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
functional_test.go:750: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-068200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m17.0865203s)
functional_test.go:754: restart took 1m17.086817s for "functional-068200" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (77.09s)

TestFunctional/serial/ComponentHealth (0.24s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-068200 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.24s)

TestFunctional/serial/LogsCmd (4.34s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 logs
functional_test.go:1229: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 logs: (4.3359486s)
--- PASS: TestFunctional/serial/LogsCmd (4.34s)

TestFunctional/serial/LogsFileCmd (5.03s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3838872425\001\logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3838872425\001\logs.txt: (5.0294287s)
--- PASS: TestFunctional/serial/LogsFileCmd (5.03s)

TestFunctional/parallel/ConfigCmd (1.85s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-068200 config get cpus: exit status 14 (314.7114ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-068200 config get cpus: exit status 14 (259.1556ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.85s)

TestFunctional/parallel/DryRun (5.83s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-068200 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:967: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-068200 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 23 (3.7910743s)

-- stdout --
	* [functional-068200] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=master
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0219 03:32:32.360248   10844 out.go:296] Setting OutFile to fd 708 ...
	I0219 03:32:32.444706   10844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 03:32:32.444805   10844 out.go:309] Setting ErrFile to fd 828...
	I0219 03:32:32.444805   10844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 03:32:32.468213   10844 out.go:303] Setting JSON to false
	I0219 03:32:32.474490   10844 start.go:125] hostinfo: {"hostname":"minikube1","uptime":14541,"bootTime":1676763010,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2604 Build 19045.2604","kernelVersion":"10.0.19045.2604 Build 19045.2604","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0219 03:32:32.474620   10844 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0219 03:32:32.660671   10844 out.go:177] * [functional-068200] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	I0219 03:32:32.717638   10844 notify.go:220] Checking for updates...
	I0219 03:32:32.853381   10844 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 03:32:33.056421   10844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0219 03:32:33.244086   10844 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0219 03:32:33.595807   10844 out.go:177]   - MINIKUBE_LOCATION=master
	I0219 03:32:33.943887   10844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0219 03:32:34.042927   10844 config.go:182] Loaded profile config "functional-068200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 03:32:34.043798   10844 driver.go:365] Setting default libvirt URI to qemu:///system
	I0219 03:32:35.891165   10844 out.go:177] * Using the hyperv driver based on existing profile
	I0219 03:32:35.894537   10844 start.go:296] selected driver: hyperv
	I0219 03:32:35.894605   10844 start.go:857] validating driver "hyperv" against &{Name:functional-068200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-068200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.28.246.195 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 03:32:35.894605   10844 start.go:868] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0219 03:32:35.944662   10844 out.go:177] 
	W0219 03:32:35.949745   10844 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0219 03:32:35.953130   10844 out.go:177] 

** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-068200 --dry-run --alsologtostderr -v=1 --driver=hyperv
functional_test.go:984: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-068200 --dry-run --alsologtostderr -v=1 --driver=hyperv: (2.0365897s)
--- PASS: TestFunctional/parallel/DryRun (5.83s)

TestFunctional/parallel/InternationalLanguage (2.08s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-068200 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-068200 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 23 (2.0770574s)

-- stdout --
	* [functional-068200] minikube v1.29.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=master
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote hyperv basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0219 03:32:30.261168    9400 out.go:296] Setting OutFile to fd 1020 ...
	I0219 03:32:30.339733    9400 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 03:32:30.339733    9400 out.go:309] Setting ErrFile to fd 700...
	I0219 03:32:30.339915    9400 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 03:32:30.363509    9400 out.go:303] Setting JSON to false
	I0219 03:32:30.371060    9400 start.go:125] hostinfo: {"hostname":"minikube1","uptime":14539,"bootTime":1676763010,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2604 Build 19045.2604","kernelVersion":"10.0.19045.2604 Build 19045.2604","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0219 03:32:30.371060    9400 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0219 03:32:30.375360    9400 out.go:177] * [functional-068200] minikube v1.29.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	I0219 03:32:30.380644    9400 notify.go:220] Checking for updates...
	I0219 03:32:30.382300    9400 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0219 03:32:30.386023    9400 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0219 03:32:30.391605    9400 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0219 03:32:30.394242    9400 out.go:177]   - MINIKUBE_LOCATION=master
	I0219 03:32:30.396914    9400 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0219 03:32:30.400283    9400 config.go:182] Loaded profile config "functional-068200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 03:32:30.401153    9400 driver.go:365] Setting default libvirt URI to qemu:///system
	I0219 03:32:32.078091    9400 out.go:177] * Utilisation du pilote hyperv basé sur le profil existant
	I0219 03:32:32.081121    9400 start.go:296] selected driver: hyperv
	I0219 03:32:32.081203    9400 start.go:857] validating driver "hyperv" against &{Name:functional-068200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-068200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.28.246.195 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0219 03:32:32.081203    9400 start.go:868] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0219 03:32:32.137508    9400 out.go:177] 
	W0219 03:32:32.139666    9400 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0219 03:32:32.142246    9400 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (2.08s)

TestFunctional/parallel/StatusCmd (15.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 status
functional_test.go:847: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 status: (5.0900808s)
functional_test.go:853: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:853: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (5.0038479s)
functional_test.go:865: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 status -o json
functional_test.go:865: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 status -o json: (4.9470137s)
--- PASS: TestFunctional/parallel/StatusCmd (15.04s)

TestFunctional/parallel/ServiceCmd (42.2s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-068200 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-068200 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-c4cc9" [743c4969-8581-46ee-b1a0-cca06ab80673] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6fddd6858d-c4cc9" [743c4969-8581-46ee-b1a0-cca06ab80673] Running
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 16.0265491s
functional_test.go:1449: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 service list
functional_test.go:1449: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 service list: (5.0182444s)
functional_test.go:1463: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 service --namespace=default --https --url hello-node
functional_test.go:1463: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 service --namespace=default --https --url hello-node: (6.7205525s)
functional_test.go:1476: found endpoint: https://172.28.246.195:30631
functional_test.go:1491: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 service hello-node --url --format={{.IP}}
functional_test.go:1491: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 service hello-node --url --format={{.IP}}: (6.5330816s)
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 service hello-node --url
functional_test.go:1505: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 service hello-node --url: (7.2819024s)
functional_test.go:1511: found endpoint for hello-node: http://172.28.246.195:30631
--- PASS: TestFunctional/parallel/ServiceCmd (42.20s)

TestFunctional/parallel/ServiceCmdConnect (17.9s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-068200 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-068200 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-dv4m8" [eefbd96f-71df-4308-8d34-0c7020e4d608] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-dv4m8" [eefbd96f-71df-4308-8d34-0c7020e4d608] Running
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.0454563s
functional_test.go:1579: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 service hello-node-connect --url
functional_test.go:1579: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 service hello-node-connect --url: (7.2449572s)
functional_test.go:1585: found endpoint for hello-node-connect: http://172.28.246.195:30805
functional_test.go:1605: http://172.28.246.195:30805: success! body:

Hostname: hello-node-connect-5cf7cc858f-dv4m8

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.28.246.195:8080/

Request Headers:
	accept-encoding=gzip
	host=172.28.246.195:30805
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (17.90s)

TestFunctional/parallel/AddonsCmd (0.85s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.85s)

TestFunctional/parallel/PersistentVolumeClaim (36.72s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [17a8964a-7cef-4be7-838e-016d533fcd05] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0584519s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-068200 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-068200 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-068200 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-068200 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [838b1cf5-ba22-4dcf-91f6-168136c3666d] Pending
helpers_test.go:344: "sp-pod" [838b1cf5-ba22-4dcf-91f6-168136c3666d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [838b1cf5-ba22-4dcf-91f6-168136c3666d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.0235618s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-068200 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-068200 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-068200 delete -f testdata/storage-provisioner/pod.yaml: (1.0018487s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-068200 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7884d455-3dd1-4440-ad83-ac57497e1ab4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7884d455-3dd1-4440-ad83-ac57497e1ab4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0278858s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-068200 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.72s)

TestFunctional/parallel/SSHCmd (7.09s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 ssh "echo hello"
functional_test.go:1655: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 ssh "echo hello": (3.588793s)
functional_test.go:1672: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 ssh "cat /etc/hostname"
functional_test.go:1672: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 ssh "cat /etc/hostname": (3.49924s)
--- PASS: TestFunctional/parallel/SSHCmd (7.09s)

TestFunctional/parallel/CpCmd (15.39s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 cp testdata\cp-test.txt /home/docker/cp-test.txt: (3.5314956s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 ssh -n functional-068200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 ssh -n functional-068200 "sudo cat /home/docker/cp-test.txt": (3.9907522s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 cp functional-068200:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd3433998642\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 cp functional-068200:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd3433998642\001\cp-test.txt: (4.1478018s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 ssh -n functional-068200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 ssh -n functional-068200 "sudo cat /home/docker/cp-test.txt": (3.7143888s)
--- PASS: TestFunctional/parallel/CpCmd (15.39s)

TestFunctional/parallel/MySQL (48.69s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-068200 replace --force -f testdata\mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-qtn4q" [a2d75219-6a22-4aab-a0dd-498dbed096d0] Pending
helpers_test.go:344: "mysql-888f84dd9-qtn4q" [a2d75219-6a22-4aab-a0dd-498dbed096d0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-qtn4q" [a2d75219-6a22-4aab-a0dd-498dbed096d0] Running
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 31.0429721s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-068200 exec mysql-888f84dd9-qtn4q -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-068200 exec mysql-888f84dd9-qtn4q -- mysql -ppassword -e "show databases;": exit status 1 (386.5571ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-068200 exec mysql-888f84dd9-qtn4q -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-068200 exec mysql-888f84dd9-qtn4q -- mysql -ppassword -e "show databases;": exit status 1 (436.109ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-068200 exec mysql-888f84dd9-qtn4q -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-068200 exec mysql-888f84dd9-qtn4q -- mysql -ppassword -e "show databases;": exit status 1 (500.3356ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-068200 exec mysql-888f84dd9-qtn4q -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-068200 exec mysql-888f84dd9-qtn4q -- mysql -ppassword -e "show databases;": exit status 1 (483.1645ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-068200 exec mysql-888f84dd9-qtn4q -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-068200 exec mysql-888f84dd9-qtn4q -- mysql -ppassword -e "show databases;": exit status 1 (545.8115ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-068200 exec mysql-888f84dd9-qtn4q -- mysql -ppassword -e "show databases;"
E0219 03:35:41.850106   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/MySQL (48.69s)
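The retries above trace MySQL's startup phases: ERROR 2002 while the server socket is not yet up, ERROR 1045 while initialization is still installing the real credentials, then success. The test simply polls `show databases` until it succeeds. A minimal sketch of that poll loop, with hypothetical names (`retry`, `probe`) that are not part of the minikube test suite; the probe here is a local stand-in that fails twice before succeeding:

```shell
# Retry a command until it succeeds or the attempt budget runs out.
retry() {
  attempts=$1; shift
  i=1
  until "$@"; do
    if [ "$i" -ge "$attempts" ]; then
      echo "giving up after $i attempts" >&2
      return 1
    fi
    i=$((i + 1))
    sleep 1
  done
}

# Stand-in for the mysql readiness probe: fails on the first two calls
# (like the ERROR 2002/1045 phases above), then succeeds.
count_file=$(mktemp)
echo 0 > "$count_file"
probe() {
  n=$(($(cat "$count_file") + 1))
  echo "$n" > "$count_file"
  [ "$n" -ge 3 ]
}

retry 10 probe && echo "mysql ready"
```

In the real test the probed command is `kubectl exec ... -- mysql -ppassword -e "show databases;"` against the running pod.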

TestFunctional/parallel/FileSync (3.8s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/10148/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 ssh "sudo cat /etc/test/nested/copy/10148/hosts"
functional_test.go:1858: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 ssh "sudo cat /etc/test/nested/copy/10148/hosts": (3.8039383s)
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (3.80s)

TestFunctional/parallel/CertSync (24.04s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/10148.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 ssh "sudo cat /etc/ssl/certs/10148.pem"
functional_test.go:1900: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 ssh "sudo cat /etc/ssl/certs/10148.pem": (4.1642203s)
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/10148.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 ssh "sudo cat /usr/share/ca-certificates/10148.pem"
functional_test.go:1900: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 ssh "sudo cat /usr/share/ca-certificates/10148.pem": (3.7784763s)
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1900: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 ssh "sudo cat /etc/ssl/certs/51391683.0": (4.5294584s)
functional_test.go:1926: Checking for existence of /etc/ssl/certs/101482.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 ssh "sudo cat /etc/ssl/certs/101482.pem"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 ssh "sudo cat /etc/ssl/certs/101482.pem": (4.0322199s)
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/101482.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 ssh "sudo cat /usr/share/ca-certificates/101482.pem"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 ssh "sudo cat /usr/share/ca-certificates/101482.pem": (3.8217008s)
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (3.711262s)
--- PASS: TestFunctional/parallel/CertSync (24.04s)
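The hashed filenames checked above (`/etc/ssl/certs/51391683.0`, `/etc/ssl/certs/3ec20f2e.0`) follow OpenSSL's subject-hash convention: the trust store names each certificate after its `openssl x509 -hash` value plus a collision counter suffix. A minimal sketch, assuming the `openssl` CLI is available; the certificate here is a throwaway, not one of the test's files:

```shell
# Build a throwaway self-signed cert and install it under its subject-hash
# name, the way entries like /etc/ssl/certs/51391683.0 are produced.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example.test" \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" 2>/dev/null

hash=$(openssl x509 -noout -hash -in "$dir/cert.pem")  # 8 hex digits
cp "$dir/cert.pem" "$dir/$hash.0"   # ".0" = first cert with this hash
ls "$dir"
```

This is why the test can verify the same cert both under its `.pem` name and under its hash name.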

TestFunctional/parallel/NodeLabels (0.31s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-068200 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.31s)

TestFunctional/parallel/NonActiveRuntimeDisabled (4.05s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 ssh "sudo systemctl is-active crio"
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-068200 ssh "sudo systemctl is-active crio": exit status 1 (4.0512869s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (4.05s)
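The `exit status 1` / `ssh: Process exited with status 3` pair above is expected: `systemctl is-active` prints `inactive` and exits non-zero (3) for a stopped unit, and the test treats any non-zero exit as confirmation that crio is disabled. A stand-in sketch of that check (no systemd required; the inner `sh -c` fakes the remote command):

```shell
# Fake the remote "systemctl is-active crio" call: print a state, exit 3,
# then branch on the captured status as the test effectively does.
if sh -c 'echo inactive; exit 3'; then
  status=0
else
  status=$?
fi
if [ "$status" -ne 0 ]; then
  echo "service not active (exit $status)"
fi
```

Running the probed command in the `if` condition keeps the non-zero exit from aborting a script run under `set -e`.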

TestFunctional/parallel/License (2.59s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2215: (dbg) Done: out/minikube-windows-amd64.exe license: (2.5786332s)
--- PASS: TestFunctional/parallel/License (2.59s)

TestFunctional/parallel/ProfileCmd/profile_not_create (4.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.7641574s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (4.34s)

TestFunctional/parallel/ProfileCmd/profile_list (3.77s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (3.4890853s)
functional_test.go:1311: Took "3.489277s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "284.145ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (3.77s)

TestFunctional/parallel/ProfileCmd/profile_json_output (3.8s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (3.5172568s)
functional_test.go:1362: Took "3.5175096s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "282.5652ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (3.80s)

TestFunctional/parallel/Version/short (0.36s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 version --short
--- PASS: TestFunctional/parallel/Version/short (0.36s)

TestFunctional/parallel/Version/components (4.17s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 version -o=json --components
functional_test.go:2197: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 version -o=json --components: (4.1746495s)
--- PASS: TestFunctional/parallel/Version/components (4.17s)

TestFunctional/parallel/ImageCommands/ImageListShort (3.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image ls --format short
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image ls --format short: (3.0774918s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-068200 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-068200
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-068200
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (3.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (3.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image ls --format table
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image ls --format table: (3.025392s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-068200 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/mysql                     | 5.7               | be16cf2d832a9 | 455MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-068200 | 7ee9ecc2187ab | 30B    |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| docker.io/library/nginx                     | latest            | 3f8a00f137a0d | 142MB  |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| gcr.io/google-containers/addon-resizer      | functional-068200 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | alpine            | 2bc7edbc3cf2f | 40.7MB |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (3.03s)

TestFunctional/parallel/ImageCommands/ImageListJson (3.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image ls --format json
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image ls --format json: (3.2414419s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-068200 image ls --format json:
[{"id":"3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"7ee9ecc2187ab97738188b41fd7ca98bc3e6ab36ea6975f814378594d230c1a0","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-068200"],"size":"30"},{"id":"2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-068200"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (3.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (3.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image ls --format yaml
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image ls --format yaml: (3.1330952s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-068200 image ls --format yaml:
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 7ee9ecc2187ab97738188b41fd7ca98bc3e6ab36ea6975f814378594d230c1a0
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-068200
size: "30"
- id: be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: 3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-068200
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (3.13s)

TestFunctional/parallel/ImageCommands/ImageBuild (13.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-068200 ssh pgrep buildkitd: exit status 1 (3.8741242s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image build -t localhost/my-image:functional-068200 testdata\build
functional_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image build -t localhost/my-image:functional-068200 testdata\build: (7.1837578s)
functional_test.go:316: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-068200 image build -t localhost/my-image:functional-068200 testdata\build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 6c6a79d11e74
Removing intermediate container 6c6a79d11e74
---> 2c27245a2a6c
Step 3/3 : ADD content.txt /
---> 243a1cb8bf20
Successfully built 243a1cb8bf20
Successfully tagged localhost/my-image:functional-068200
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image ls: (2.8532092s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (13.91s)

TestFunctional/parallel/ImageCommands/Setup (3.23s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.8900391s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-068200
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-068200 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.69s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-068200 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [6adae931-c943-4dd7-b6f5-6c9e72c41090] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [6adae931-c943-4dd7-b6f5-6c9e72c41090] Running
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.0457637s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.69s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (12.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image load --daemon gcr.io/google-containers/addon-resizer:functional-068200
functional_test.go:351: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image load --daemon gcr.io/google-containers/addon-resizer:functional-068200: (9.2772408s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image ls: (2.9955281s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (12.27s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (10.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image load --daemon gcr.io/google-containers/addon-resizer:functional-068200
functional_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image load --daemon gcr.io/google-containers/addon-resizer:functional-068200: (7.3102576s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image ls: (3.0961689s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (10.41s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-068200 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 8552: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/DockerEnv/powershell (17.63s)
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:492: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-068200 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-068200"
functional_test.go:492: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-068200 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-068200": (11.7798117s)
functional_test.go:515: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-068200 docker-env | Invoke-Expression ; docker images"
functional_test.go:515: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-068200 docker-env | Invoke-Expression ; docker images": (5.8378982s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (17.63s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.5509003s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-068200
functional_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image load --daemon gcr.io/google-containers/addon-resizer:functional-068200
functional_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image load --daemon gcr.io/google-containers/addon-resizer:functional-068200: (9.270662s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image ls: (3.1025918s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.21s)

TestFunctional/parallel/UpdateContextCmd/no_changes (1.34s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 update-context --alsologtostderr -v=2
functional_test.go:2046: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 update-context --alsologtostderr -v=2: (1.3348954s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (1.34s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (1.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 update-context --alsologtostderr -v=2
functional_test.go:2046: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 update-context --alsologtostderr -v=2: (1.0494302s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (1.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (1.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 update-context --alsologtostderr -v=2
functional_test.go:2046: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 update-context --alsologtostderr -v=2: (1.0548183s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (1.06s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.71s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image save gcr.io/google-containers/addon-resizer:functional-068200 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image save gcr.io/google-containers/addon-resizer:functional-068200 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar: (5.7050308s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.71s)

TestFunctional/parallel/ImageCommands/ImageRemove (6.93s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image rm gcr.io/google-containers/addon-resizer:functional-068200
functional_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image rm gcr.io/google-containers/addon-resizer:functional-068200: (3.8821806s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image ls: (3.05208s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (6.93s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (8.83s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar
functional_test.go:405: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar: (5.9158881s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image ls: (2.9110786s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (8.83s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.46s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-068200
functional_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-068200 image save --daemon gcr.io/google-containers/addon-resizer:functional-068200
functional_test.go:420: (dbg) Done: out/minikube-windows-amd64.exe -p functional-068200 image save --daemon gcr.io/google-containers/addon-resizer:functional-068200: (6.9481624s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-068200
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.46s)

TestFunctional/delete_addon-resizer_images (0.65s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-068200
--- PASS: TestFunctional/delete_addon-resizer_images (0.65s)

TestFunctional/delete_my-image_image (0.22s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-068200
--- PASS: TestFunctional/delete_my-image_image (0.22s)

TestFunctional/delete_minikube_cached_images (0.23s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-068200
--- PASS: TestFunctional/delete_minikube_cached_images (0.23s)

TestImageBuild/serial/NormalBuild (5.15s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-351900
image_test.go:73: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-351900: (5.1545527s)
--- PASS: TestImageBuild/serial/NormalBuild (5.15s)

TestImageBuild/serial/BuildWithBuildArg (6.41s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-351900
image_test.go:94: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-351900: (6.4129597s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (6.41s)

TestImageBuild/serial/BuildWithDockerIgnore (3.56s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-351900
image_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-351900: (3.56267s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (3.56s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (3.91s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-351900
E0219 03:40:41.844378   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
image_test.go:83: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-351900: (3.9059015s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (3.91s)

TestIngressAddonLegacy/StartLegacyK8sCluster (137.15s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-583800 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv
E0219 03:42:05.036393   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 03:42:14.741362   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 03:42:14.756899   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 03:42:14.772662   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 03:42:14.804692   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 03:42:14.852320   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 03:42:14.947325   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 03:42:15.110267   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 03:42:15.441089   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 03:42:16.090321   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 03:42:17.385141   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 03:42:19.956006   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 03:42:25.078726   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 03:42:35.323447   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 03:42:55.816354   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 03:43:36.786058   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-583800 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv: (2m17.1461389s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (137.15s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (25.27s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-583800 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-583800 addons enable ingress --alsologtostderr -v=5: (25.2686998s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (25.27s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (3.22s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-583800 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-583800 addons enable ingress-dns --alsologtostderr -v=5: (3.2153194s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (3.22s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (49.46s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-583800 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-583800 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (6.3866319s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-583800 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-583800 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b6bf2fc4-ba37-424e-8ff9-ed1239728818] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b6bf2fc4-ba37-424e-8ff9-ed1239728818] Running
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 16.0685221s
addons_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-583800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-583800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (3.6610882s)
addons_test.go:251: (dbg) Run:  kubectl --context ingress-addon-legacy-583800 replace --force -f testdata\ingress-dns-example-v1beta1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-583800 ip
addons_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-583800 ip: (1.047848s)
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 172.28.246.102
addons_test.go:271: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-583800 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-583800 addons disable ingress-dns --alsologtostderr -v=1: (10.2830322s)
addons_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-583800 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-583800 addons disable ingress --alsologtostderr -v=1: (10.3268604s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (49.46s)

TestJSONOutput/start/Command (131.67s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-436100 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0219 03:45:41.839916   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 03:47:14.731434   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 03:47:42.563785   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-436100 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (2m11.6660091s)
--- PASS: TestJSONOutput/start/Command (131.67s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (3.59s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-436100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-436100 --output=json --user=testUser: (3.5927156s)
--- PASS: TestJSONOutput/pause/Command (3.59s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (3.47s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-436100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-436100 --output=json --user=testUser: (3.4689929s)
--- PASS: TestJSONOutput/unpause/Command (3.47s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (24.37s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-436100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-436100 --output=json --user=testUser: (24.3695688s)
--- PASS: TestJSONOutput/stop/Command (24.37s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.49s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-308200 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-308200 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (267.5294ms)

-- stdout --
	{"specversion":"1.0","id":"d9c67e59-d632-4429-a174-bb3251dcaa33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-308200] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d44bd7f-b812-4b60-a929-b165b4c38033","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube1\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"0ceb5b7a-da34-4f76-a1ad-01dcf4f9f385","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3e9020d9-e2f5-4120-91ae-5602c9a4c956","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"bd2922cd-31a6-460e-a3d2-bf6ae06fa0e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=master"}}
	{"specversion":"1.0","id":"3c96f245-d59a-4d0e-893c-6e09e990ff9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"479db407-d974-4b5c-a91a-937919421fd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-308200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-308200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-308200: (1.2191703s)
--- PASS: TestErrorJSONOutput (1.49s)
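The `--output=json` lines captured above are newline-delimited CloudEvents envelopes. As a minimal sketch (not minikube's own parser, and mirroring only the fields visible in this log), a consumer can decode each line and surface `io.k8s.sigs.minikube.error` events like the `DRV_UNSUPPORTED_OS` one shown:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent mirrors only the envelope fields visible in the log above;
// the real event schema may carry additional attributes.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

// sample is the error event line copied verbatim from the log above.
var sample = `{"specversion":"1.0","id":"479db407-d974-4b5c-a91a-937919421fd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

// summarizeError decodes one JSON line; if it is a minikube error event,
// it returns a one-line summary and true.
func summarizeError(line string) (string, bool) {
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		return "", false // not a JSON event line
	}
	if ev.Type != "io.k8s.sigs.minikube.error" {
		return "", false
	}
	return fmt.Sprintf("exitcode=%s name=%s: %s",
		ev.Data["exitcode"], ev.Data["name"], ev.Data["message"]), true
}

func main() {
	if s, ok := summarizeError(sample); ok {
		fmt.Println(s)
	}
}
```

Applied to the captured stream, only the final event matches; the `io.k8s.sigs.minikube.step` and `.info` events fall through the type check.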

TestMainNoArgs (0.3s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.30s)

TestMinikubeProfile (320.81s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-794600 --driver=hyperv
E0219 03:49:05.295790   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 03:49:05.310773   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 03:49:05.326018   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 03:49:05.357423   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 03:49:05.405616   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 03:49:05.501088   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 03:49:05.675421   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 03:49:06.012511   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 03:49:06.656611   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 03:49:07.938505   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 03:49:10.501154   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 03:49:15.635414   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 03:49:25.888684   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 03:49:46.374677   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 03:50:27.335218   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-794600 --driver=hyperv: (1m58.8122705s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-794600 --driver=hyperv
E0219 03:50:41.841400   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 03:51:49.258966   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 03:52:14.739835   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-794600 --driver=hyperv: (2m1.3841725s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-794600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (5.9826803s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-794600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (6.0194164s)
helpers_test.go:175: Cleaning up "second-794600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-794600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-794600: (31.3799908s)
helpers_test.go:175: Cleaning up "first-794600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-794600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-794600: (36.262619s)
--- PASS: TestMinikubeProfile (320.81s)

TestMountStart/serial/StartWithMountFirst (75.88s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-208200 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0219 03:54:05.295488   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 03:54:33.101954   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-208200 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (1m14.8639782s)
--- PASS: TestMountStart/serial/StartWithMountFirst (75.88s)

TestMountStart/serial/VerifyMountFirst (3.67s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-208200 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-208200 ssh -- ls /minikube-host: (3.6726972s)
--- PASS: TestMountStart/serial/VerifyMountFirst (3.67s)

TestMountStart/serial/StartWithMountSecond (76.47s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-208200 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0219 03:55:41.839920   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-208200 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (1m15.4624065s)
--- PASS: TestMountStart/serial/StartWithMountSecond (76.47s)

TestMountStart/serial/VerifyMountSecond (3.61s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-208200 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-208200 ssh -- ls /minikube-host: (3.6141495s)
--- PASS: TestMountStart/serial/VerifyMountSecond (3.61s)

TestMountStart/serial/DeleteFirst (12.41s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-208200 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-208200 --alsologtostderr -v=5: (12.4048543s)
--- PASS: TestMountStart/serial/DeleteFirst (12.41s)

TestMountStart/serial/VerifyMountPostDelete (3.5s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-208200 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-208200 ssh -- ls /minikube-host: (3.503052s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (3.50s)

TestMountStart/serial/Stop (10.81s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-208200
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-208200: (10.8079836s)
--- PASS: TestMountStart/serial/Stop (10.81s)

TestMountStart/serial/RestartStopped (63.35s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-208200
E0219 03:57:14.738374   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-208200: (1m2.3429486s)
--- PASS: TestMountStart/serial/RestartStopped (63.35s)

TestMountStart/serial/VerifyMountPostStop (3.76s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-208200 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-208200 ssh -- ls /minikube-host: (3.7598468s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (3.76s)

TestMultiNode/serial/FreshStart2Nodes (261.92s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-657900 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0219 03:58:37.936376   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 03:58:45.036516   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 03:59:05.296213   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 04:00:41.844707   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 04:02:14.735580   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
multinode_test.go:83: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-657900 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (4m12.3432922s)
multinode_test.go:89: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 status --alsologtostderr
multinode_test.go:89: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 status --alsologtostderr: (9.5762128s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (261.92s)

TestMultiNode/serial/DeployApp2Nodes (10.53s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- rollout status deployment/busybox: (3.6505258s)
multinode_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- exec busybox-6b86dd6d48-brhr9 -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- exec busybox-6b86dd6d48-brhr9 -- nslookup kubernetes.io: (1.9046938s)
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- exec busybox-6b86dd6d48-xg2wx -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- exec busybox-6b86dd6d48-brhr9 -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- exec busybox-6b86dd6d48-xg2wx -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- exec busybox-6b86dd6d48-brhr9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-657900 -- exec busybox-6b86dd6d48-xg2wx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (10.53s)

TestMultiNode/serial/AddNode (128.52s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-657900 -v 3 --alsologtostderr
E0219 04:04:05.288214   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 04:05:28.460538   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
multinode_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-657900 -v 3 --alsologtostderr: (1m54.7993943s)
multinode_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 status --alsologtostderr
E0219 04:05:41.831176   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
multinode_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 status --alsologtostderr: (13.7191778s)
--- PASS: TestMultiNode/serial/AddNode (128.52s)

TestMultiNode/serial/ProfileList (3.09s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.085255s)
--- PASS: TestMultiNode/serial/ProfileList (3.09s)

TestMultiNode/serial/CopyFile (137.11s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 status --output json --alsologtostderr: (13.5112354s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 cp testdata\cp-test.txt multinode-657900:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 cp testdata\cp-test.txt multinode-657900:/home/docker/cp-test.txt: (3.5793266s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900 "sudo cat /home/docker/cp-test.txt": (3.5763323s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile3876665541\001\cp-test_multinode-657900.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile3876665541\001\cp-test_multinode-657900.txt: (3.5356643s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900 "sudo cat /home/docker/cp-test.txt": (3.6043075s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900:/home/docker/cp-test.txt multinode-657900-m02:/home/docker/cp-test_multinode-657900_multinode-657900-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900:/home/docker/cp-test.txt multinode-657900-m02:/home/docker/cp-test_multinode-657900_multinode-657900-m02.txt: (6.1766753s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900 "sudo cat /home/docker/cp-test.txt": (3.5372486s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m02 "sudo cat /home/docker/cp-test_multinode-657900_multinode-657900-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m02 "sudo cat /home/docker/cp-test_multinode-657900_multinode-657900-m02.txt": (3.5372098s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900:/home/docker/cp-test.txt multinode-657900-m03:/home/docker/cp-test_multinode-657900_multinode-657900-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900:/home/docker/cp-test.txt multinode-657900-m03:/home/docker/cp-test_multinode-657900_multinode-657900-m03.txt: (6.2061256s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900 "sudo cat /home/docker/cp-test.txt": (3.5562303s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m03 "sudo cat /home/docker/cp-test_multinode-657900_multinode-657900-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m03 "sudo cat /home/docker/cp-test_multinode-657900_multinode-657900-m03.txt": (3.5079087s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 cp testdata\cp-test.txt multinode-657900-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 cp testdata\cp-test.txt multinode-657900-m02:/home/docker/cp-test.txt: (3.566682s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m02 "sudo cat /home/docker/cp-test.txt": (3.5449845s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile3876665541\001\cp-test_multinode-657900-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile3876665541\001\cp-test_multinode-657900-m02.txt: (3.5333858s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m02 "sudo cat /home/docker/cp-test.txt": (3.54535s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900-m02:/home/docker/cp-test.txt multinode-657900:/home/docker/cp-test_multinode-657900-m02_multinode-657900.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900-m02:/home/docker/cp-test.txt multinode-657900:/home/docker/cp-test_multinode-657900-m02_multinode-657900.txt: (6.3520131s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m02 "sudo cat /home/docker/cp-test.txt": (3.6609411s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900 "sudo cat /home/docker/cp-test_multinode-657900-m02_multinode-657900.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900 "sudo cat /home/docker/cp-test_multinode-657900-m02_multinode-657900.txt": (3.6448509s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900-m02:/home/docker/cp-test.txt multinode-657900-m03:/home/docker/cp-test_multinode-657900-m02_multinode-657900-m03.txt
E0219 04:07:14.724561   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900-m02:/home/docker/cp-test.txt multinode-657900-m03:/home/docker/cp-test_multinode-657900-m02_multinode-657900-m03.txt: (6.310067s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m02 "sudo cat /home/docker/cp-test.txt": (3.5653396s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m03 "sudo cat /home/docker/cp-test_multinode-657900-m02_multinode-657900-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m03 "sudo cat /home/docker/cp-test_multinode-657900-m02_multinode-657900-m03.txt": (3.6445059s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 cp testdata\cp-test.txt multinode-657900-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 cp testdata\cp-test.txt multinode-657900-m03:/home/docker/cp-test.txt: (3.6787134s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m03 "sudo cat /home/docker/cp-test.txt": (3.6507505s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile3876665541\001\cp-test_multinode-657900-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile3876665541\001\cp-test_multinode-657900-m03.txt: (3.5113902s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m03 "sudo cat /home/docker/cp-test.txt": (3.67659s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900-m03:/home/docker/cp-test.txt multinode-657900:/home/docker/cp-test_multinode-657900-m03_multinode-657900.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900-m03:/home/docker/cp-test.txt multinode-657900:/home/docker/cp-test_multinode-657900-m03_multinode-657900.txt: (6.2549675s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m03 "sudo cat /home/docker/cp-test.txt": (3.7151499s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900 "sudo cat /home/docker/cp-test_multinode-657900-m03_multinode-657900.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900 "sudo cat /home/docker/cp-test_multinode-657900-m03_multinode-657900.txt": (3.5307923s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900-m03:/home/docker/cp-test.txt multinode-657900-m02:/home/docker/cp-test_multinode-657900-m03_multinode-657900-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 cp multinode-657900-m03:/home/docker/cp-test.txt multinode-657900-m02:/home/docker/cp-test_multinode-657900-m03_multinode-657900-m02.txt: (6.2070176s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m03 "sudo cat /home/docker/cp-test.txt": (3.5948299s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m02 "sudo cat /home/docker/cp-test_multinode-657900-m03_multinode-657900-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 ssh -n multinode-657900-m02 "sudo cat /home/docker/cp-test_multinode-657900-m03_multinode-657900-m02.txt": (3.5824178s)
--- PASS: TestMultiNode/serial/CopyFile (137.11s)

TestMultiNode/serial/StopNode (31.11s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 node stop m03: (11.0132544s)
multinode_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-657900 status: exit status 7 (10.0546884s)

-- stdout --
	multinode-657900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-657900-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-657900-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-657900 status --alsologtostderr: exit status 7 (10.0448721s)

-- stdout --
	multinode-657900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-657900-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-657900-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0219 04:08:29.296403   10576 out.go:296] Setting OutFile to fd 676 ...
	I0219 04:08:29.355412   10576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:08:29.355412   10576 out.go:309] Setting ErrFile to fd 924...
	I0219 04:08:29.355412   10576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:08:29.367402   10576 out.go:303] Setting JSON to false
	I0219 04:08:29.367402   10576 mustload.go:65] Loading cluster: multinode-657900
	I0219 04:08:29.367402   10576 notify.go:220] Checking for updates...
	I0219 04:08:29.368401   10576 config.go:182] Loaded profile config "multinode-657900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:08:29.368401   10576 status.go:255] checking status of multinode-657900 ...
	I0219 04:08:29.393906   10576 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:08:30.137966   10576 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:08:30.137966   10576 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:08:30.138034   10576 status.go:330] multinode-657900 host status = "Running" (err=<nil>)
	I0219 04:08:30.138034   10576 host.go:66] Checking if "multinode-657900" exists ...
	I0219 04:08:30.138765   10576 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:08:30.869794   10576 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:08:30.869891   10576 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:08:30.869939   10576 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:08:31.955262   10576 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 04:08:31.955262   10576 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:08:31.955262   10576 host.go:66] Checking if "multinode-657900" exists ...
	I0219 04:08:31.965440   10576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0219 04:08:31.965440   10576 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:08:32.735959   10576 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:08:32.736045   10576 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:08:32.736113   10576 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900 ).networkadapters[0]).ipaddresses[0]
	I0219 04:08:33.786512   10576 main.go:141] libmachine: [stdout =====>] : 172.28.246.233
	
	I0219 04:08:33.786512   10576 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:08:33.786512   10576 sshutil.go:53] new ssh client: &{IP:172.28.246.233 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900\id_rsa Username:docker}
	I0219 04:08:33.889024   10576 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.923591s)
	I0219 04:08:33.900170   10576 ssh_runner.go:195] Run: systemctl --version
	I0219 04:08:33.916536   10576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:08:33.939224   10576 kubeconfig.go:92] found "multinode-657900" server: "https://172.28.246.233:8443"
	I0219 04:08:33.939327   10576 api_server.go:165] Checking apiserver status ...
	I0219 04:08:33.948921   10576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0219 04:08:33.981737   10576 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1951/cgroup
	I0219 04:08:33.996297   10576 api_server.go:181] apiserver freezer: "6:freezer:/kubepods/burstable/pod1ff63a085e26860683ab640202bbdd7b/55e12988bbaef91e3bb8f58978f5b67f3fb80fb1402860bed3edfa46fc05b6d1"
	I0219 04:08:34.006011   10576 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod1ff63a085e26860683ab640202bbdd7b/55e12988bbaef91e3bb8f58978f5b67f3fb80fb1402860bed3edfa46fc05b6d1/freezer.state
	I0219 04:08:34.020826   10576 api_server.go:203] freezer state: "THAWED"
	I0219 04:08:34.020826   10576 api_server.go:252] Checking apiserver healthz at https://172.28.246.233:8443/healthz ...
	I0219 04:08:34.031005   10576 api_server.go:278] https://172.28.246.233:8443/healthz returned 200:
	ok
	I0219 04:08:34.031064   10576 status.go:421] multinode-657900 apiserver status = Running (err=<nil>)
	I0219 04:08:34.031097   10576 status.go:257] multinode-657900 status: &{Name:multinode-657900 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0219 04:08:34.031097   10576 status.go:255] checking status of multinode-657900-m02 ...
	I0219 04:08:34.031097   10576 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:08:34.777068   10576 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:08:34.777360   10576 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:08:34.777419   10576 status.go:330] multinode-657900-m02 host status = "Running" (err=<nil>)
	I0219 04:08:34.777419   10576 host.go:66] Checking if "multinode-657900-m02" exists ...
	I0219 04:08:34.778453   10576 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:08:35.506308   10576 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:08:35.506308   10576 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:08:35.506401   10576 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:08:36.517595   10576 main.go:141] libmachine: [stdout =====>] : 172.28.248.228
	
	I0219 04:08:36.517799   10576 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:08:36.517799   10576 host.go:66] Checking if "multinode-657900-m02" exists ...
	I0219 04:08:36.527287   10576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0219 04:08:36.527287   10576 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:08:37.249404   10576 main.go:141] libmachine: [stdout =====>] : Running
	
	I0219 04:08:37.249476   10576 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:08:37.249476   10576 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-657900-m02 ).networkadapters[0]).ipaddresses[0]
	I0219 04:08:38.296525   10576 main.go:141] libmachine: [stdout =====>] : 172.28.248.228
	
	I0219 04:08:38.296705   10576 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:08:38.297124   10576 sshutil.go:53] new ssh client: &{IP:172.28.248.228 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-657900-m02\id_rsa Username:docker}
	I0219 04:08:38.400147   10576 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.8728667s)
	I0219 04:08:38.410518   10576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0219 04:08:38.435064   10576 status.go:257] multinode-657900-m02 status: &{Name:multinode-657900-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0219 04:08:38.435064   10576 status.go:255] checking status of multinode-657900-m03 ...
	I0219 04:08:38.435939   10576 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m03 ).state
	I0219 04:08:39.150739   10576 main.go:141] libmachine: [stdout =====>] : Off
	
	I0219 04:08:39.150958   10576 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:08:39.150958   10576 status.go:330] multinode-657900-m03 host status = "Stopped" (err=<nil>)
	I0219 04:08:39.151024   10576 status.go:343] host is not running, skipping remaining checks
	I0219 04:08:39.151024   10576 status.go:257] multinode-657900-m03 status: &{Name:multinode-657900-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (31.11s)

TestMultiNode/serial/StartAfterStop (91.65s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:252: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 node start m03 --alsologtostderr
E0219 04:09:05.289708   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
multinode_test.go:252: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 node start m03 --alsologtostderr: (1m17.9594115s)
multinode_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 status
multinode_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 status: (13.4574583s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (91.65s)

TestMultiNode/serial/DeleteNode (36.35s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 node delete m03: (26.6705445s)
multinode_test.go:398: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 status --alsologtostderr
multinode_test.go:398: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 status --alsologtostderr: (9.2502165s)
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (36.35s)

TestMultiNode/serial/StopMultiNode (46.91s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 stop
E0219 04:17:14.733390   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
multinode_test.go:312: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 stop: (43.6891939s)
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-657900 status: exit status 7 (1.6285253s)

-- stdout --
	multinode-657900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-657900-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-657900 status --alsologtostderr: exit status 7 (1.5963701s)

-- stdout --
	multinode-657900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-657900-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0219 04:17:21.551924    1604 out.go:296] Setting OutFile to fd 896 ...
	I0219 04:17:21.609108    1604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:17:21.609108    1604 out.go:309] Setting ErrFile to fd 676...
	I0219 04:17:21.609108    1604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0219 04:17:21.621373    1604 out.go:303] Setting JSON to false
	I0219 04:17:21.621373    1604 mustload.go:65] Loading cluster: multinode-657900
	I0219 04:17:21.621373    1604 notify.go:220] Checking for updates...
	I0219 04:17:21.621886    1604 config.go:182] Loaded profile config "multinode-657900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0219 04:17:21.621886    1604 status.go:255] checking status of multinode-657900 ...
	I0219 04:17:21.622684    1604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900 ).state
	I0219 04:17:22.287372    1604 main.go:141] libmachine: [stdout =====>] : Off
	
	I0219 04:17:22.287372    1604 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:17:22.287479    1604 status.go:330] multinode-657900 host status = "Stopped" (err=<nil>)
	I0219 04:17:22.287479    1604 status.go:343] host is not running, skipping remaining checks
	I0219 04:17:22.287583    1604 status.go:257] multinode-657900 status: &{Name:multinode-657900 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0219 04:17:22.287711    1604 status.go:255] checking status of multinode-657900-m02 ...
	I0219 04:17:22.288356    1604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-657900-m02 ).state
	I0219 04:17:22.969418    1604 main.go:141] libmachine: [stdout =====>] : Off
	
	I0219 04:17:22.969418    1604 main.go:141] libmachine: [stderr =====>] : 
	I0219 04:17:22.969418    1604 status.go:330] multinode-657900-m02 host status = "Stopped" (err=<nil>)
	I0219 04:17:22.969418    1604 status.go:343] host is not running, skipping remaining checks
	I0219 04:17:22.969418    1604 status.go:257] multinode-657900-m02 status: &{Name:multinode-657900-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (46.91s)

TestMultiNode/serial/RestartMultiNode (191.16s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:352: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-657900 --wait=true -v=8 --alsologtostderr --driver=hyperv
E0219 04:19:05.292412   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
multinode_test.go:352: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-657900 --wait=true -v=8 --alsologtostderr --driver=hyperv: (3m1.4688972s)
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-657900 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-657900 status --alsologtostderr: (9.1652202s)
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (191.16s)

TestMultiNode/serial/ValidateNameConflict (150.98s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-657900
multinode_test.go:450: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-657900-m02 --driver=hyperv
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-657900-m02 --driver=hyperv: exit status 14 (280.8309ms)

-- stdout --
	* [multinode-657900-m02] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=master
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-657900-m02' is duplicated with machine name 'multinode-657900-m02' in profile 'multinode-657900'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-657900-m03 --driver=hyperv
E0219 04:20:41.833205   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 04:22:08.460967   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 04:22:14.725490   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
multinode_test.go:458: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-657900-m03 --driver=hyperv: (2m0.3577072s)
multinode_test.go:465: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-657900
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-657900: exit status 80 (3.247947s)

-- stdout --
	* Adding node m03 to cluster multinode-657900
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-657900-m03 already exists in multinode-657900-m03 profile
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_node_17615de98fc431ce4460405c35b285c54151ae7f_4.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-657900-m03
multinode_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-657900-m03: (26.8553165s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (150.98s)

TestPreload (317.74s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-122700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0219 04:25:41.828169   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-122700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (2m27.1134085s)
preload_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-122700 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-122700 -- docker pull gcr.io/k8s-minikube/busybox: (4.6191956s)
preload_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-122700
preload_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-122700: (23.4769068s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-122700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0219 04:27:14.727384   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-122700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (1m53.0308656s)
preload_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-122700 -- docker images
preload_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-122700 -- docker images: (3.7718787s)
helpers_test.go:175: Cleaning up "test-preload-122700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-122700
E0219 04:29:05.282930   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-122700: (25.7220862s)
--- PASS: TestPreload (317.74s)

TestScheduledStopWindows (220.45s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-406600 --memory=2048 --driver=hyperv
E0219 04:30:41.837386   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-406600 --memory=2048 --driver=hyperv: (2m0.1240487s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-406600 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-406600 --schedule 5m: (4.6927471s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-406600 -n scheduled-stop-406600
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-406600 -n scheduled-stop-406600: (5.1088517s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-406600 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-406600 -- sudo systemctl show minikube-scheduled-stop --no-page: (3.8204449s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-406600 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-406600 --schedule 5s: (4.8046836s)
E0219 04:31:57.940540   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 04:32:05.051368   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 04:32:14.731582   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-406600
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-406600: exit status 7 (951.7598ms)

-- stdout --
	scheduled-stop-406600
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-406600 -n scheduled-stop-406600
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-406600 -n scheduled-stop-406600: exit status 7 (978.7152ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-406600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-406600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-406600: (19.9627414s)
--- PASS: TestScheduledStopWindows (220.45s)

TestKubernetesUpgrade (803.49s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-803700 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv
E0219 04:34:05.295993   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-803700 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv: (6m21.2849842s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-803700
version_upgrade_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-803700: (35.9070418s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-803700 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-803700 status --format={{.Host}}: exit status 7 (961.0004ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-803700 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=hyperv
E0219 04:40:41.830959   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
version_upgrade_test.go:251: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-803700 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=hyperv: (3m18.6791246s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-803700 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-803700 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-803700 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv: exit status 106 (329.4743ms)

-- stdout --
	* [kubernetes-upgrade-803700] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=master
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-803700
	    minikube start -p kubernetes-upgrade-803700 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8037002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-803700 --kubernetes-version=v1.26.1
	    

** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-803700 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:283: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-803700 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=hyperv: (2m34.1746578s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-803700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-803700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-803700: (31.9305355s)
--- PASS: TestKubernetesUpgrade (803.49s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-928900 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-928900 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (380.2834ms)

-- stdout --
	* [NoKubernetes-928900] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2604 Build 19045.2604
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=master
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

TestStoppedBinaryUpgrade/Setup (0.78s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.78s)

TestPause/serial/Start (154.63s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-061400 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-061400 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (2m34.6345078s)
--- PASS: TestPause/serial/Start (154.63s)

TestStoppedBinaryUpgrade/MinikubeLogs (5.59s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-608000
version_upgrade_test.go:214: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-608000: (5.5886163s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (5.59s)

TestNetworkPlugins/group/auto/Start (208.92s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperv
E0219 04:48:37.939492   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 04:48:45.051757   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperv: (3m28.9202768s)
--- PASS: TestNetworkPlugins/group/auto/Start (208.92s)

TestNetworkPlugins/group/kindnet/Start (240.85s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperv
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperv: (4m0.8452039s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (240.85s)

TestNetworkPlugins/group/calico/Start (291.38s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperv
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperv: (4m51.3775976s)
--- PASS: TestNetworkPlugins/group/calico/Start (291.38s)

TestNetworkPlugins/group/auto/KubeletFlags (4.08s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-843300 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-843300 "pgrep -a kubelet": (4.0831067s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (4.08s)

TestNetworkPlugins/group/auto/NetCatPod (15.84s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-843300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-ll544" [466103c5-53f9-4b4f-99e2-63075e98e398] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0219 04:52:14.723608   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-694fc96674-ll544" [466103c5-53f9-4b4f-99e2-63075e98e398] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.2491717s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.84s)

TestNetworkPlugins/group/auto/DNS (0.47s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-843300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.47s)

TestNetworkPlugins/group/auto/Localhost (0.5s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.50s)

TestNetworkPlugins/group/auto/HairPin (0.42s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.42s)

TestNetworkPlugins/group/custom-flannel/Start (225.75s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=hyperv
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=hyperv: (3m45.7510819s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (225.75s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-g7r4z" [3e64eb15-c974-4970-9296-2ee3691838f7] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.0355258s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

TestNetworkPlugins/group/kindnet/KubeletFlags (4.19s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-843300 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-843300 "pgrep -a kubelet": (4.1875736s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (4.19s)

TestNetworkPlugins/group/kindnet/NetCatPod (23.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-843300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:148: (dbg) Done: kubectl --context kindnet-843300 replace --force -f testdata\netcat-deployment.yaml: (2.8969125s)
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-grjfp" [c3b2322d-2fa4-4b6c-9c57-969535e9eeb6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0219 04:54:05.278819   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-694fc96674-grjfp" [c3b2322d-2fa4-4b6c-9c57-969535e9eeb6] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 19.0218325s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (23.24s)

TestNetworkPlugins/group/kindnet/DNS (0.42s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-843300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.42s)

TestNetworkPlugins/group/kindnet/Localhost (0.42s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.42s)

TestNetworkPlugins/group/kindnet/HairPin (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.40s)

TestNetworkPlugins/group/false/Start (165.99s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperv
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p false-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperv: (2m45.9888075s)
--- PASS: TestNetworkPlugins/group/false/Start (165.99s)

TestNetworkPlugins/group/calico/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-k8cf8" [baf2d66f-1fa2-46a6-856b-67a6c5dc6c9a] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.0341115s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

TestNetworkPlugins/group/calico/KubeletFlags (4.92s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-843300 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p calico-843300 "pgrep -a kubelet": (4.9161703s)
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (4.92s)

TestNetworkPlugins/group/calico/NetCatPod (17.86s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-843300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-mbsnz" [39b19779-1433-4a0b-bccf-2e7fbae9c31d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-mbsnz" [39b19779-1433-4a0b-bccf-2e7fbae9c31d] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 17.0511365s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (17.86s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (4.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-843300 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p custom-flannel-843300 "pgrep -a kubelet": (4.2518585s)
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (4.25s)

TestNetworkPlugins/group/calico/DNS (0.48s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-843300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.48s)

TestNetworkPlugins/group/calico/Localhost (0.46s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.46s)

TestNetworkPlugins/group/calico/HairPin (0.41s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.41s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (15.73s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-843300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-v8j7h" [b65167a9-d93f-4707-95dd-6d315cf21c4a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-v8j7h" [b65167a9-d93f-4707-95dd-6d315cf21c4a] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 15.0309622s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (15.73s)

TestNetworkPlugins/group/custom-flannel/DNS (0.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-843300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.45s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.38s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.41s)

TestNetworkPlugins/group/enable-default-cni/Start (191.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperv
E0219 04:57:46.984543   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\auto-843300\client.crt: The system cannot find the path specified.
E0219 04:58:27.952055   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\auto-843300\client.crt: The system cannot find the path specified.
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperv: (3m11.3598679s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (191.36s)

TestNetworkPlugins/group/false/KubeletFlags (4.36s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-843300 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-843300 "pgrep -a kubelet": (4.3582413s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (4.36s)

TestNetworkPlugins/group/false/NetCatPod (17.99s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-843300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-b4fhv" [53de96ef-bb5f-49dd-8658-81922b3f3c35] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0219 04:58:47.109313   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
E0219 04:58:47.124488   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
E0219 04:58:47.140025   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
E0219 04:58:47.171464   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
E0219 04:58:47.219664   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
E0219 04:58:47.315134   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
E0219 04:58:47.489084   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
E0219 04:58:47.818981   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
E0219 04:58:48.469373   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-694fc96674-b4fhv" [53de96ef-bb5f-49dd-8658-81922b3f3c35] Running
E0219 04:58:49.755734   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
E0219 04:58:52.325567   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 17.026097s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (17.99s)

TestNetworkPlugins/group/false/DNS (0.44s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-843300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.44s)

TestNetworkPlugins/group/false/Localhost (0.4s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.40s)

TestNetworkPlugins/group/false/HairPin (0.39s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.39s)

TestNetworkPlugins/group/flannel/Start (165.54s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperv
E0219 05:00:09.153646   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperv: (2m45.5382187s)
--- PASS: TestNetworkPlugins/group/flannel/Start (165.54s)

TestNetworkPlugins/group/bridge/Start (218.98s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperv
E0219 05:00:41.829386   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperv: (3m38.9779691s)
--- PASS: TestNetworkPlugins/group/bridge/Start (218.98s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (4.02s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-843300 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-843300 "pgrep -a kubelet": (4.0177002s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (4.02s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.74s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-843300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-9k5gb" [c865a6a3-33b5-4532-85fc-ae83fb5c151f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0219 05:01:10.003022   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-843300\client.crt: The system cannot find the path specified.
E0219 05:01:10.017975   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-843300\client.crt: The system cannot find the path specified.
E0219 05:01:10.033393   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-843300\client.crt: The system cannot find the path specified.
E0219 05:01:10.064503   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-843300\client.crt: The system cannot find the path specified.
E0219 05:01:10.111251   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-843300\client.crt: The system cannot find the path specified.
E0219 05:01:10.205492   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-843300\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-694fc96674-9k5gb" [c865a6a3-33b5-4532-85fc-ae83fb5c151f] Running
E0219 05:01:10.380027   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-843300\client.crt: The system cannot find the path specified.
E0219 05:01:10.713674   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-843300\client.crt: The system cannot find the path specified.
E0219 05:01:11.360904   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-843300\client.crt: The system cannot find the path specified.
E0219 05:01:12.642560   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-843300\client.crt: The system cannot find the path specified.
E0219 05:01:15.210601   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-843300\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.026545s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.74s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-843300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.42s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.38s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.39s)

TestNetworkPlugins/group/kubenet/Start (211.37s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperv
E0219 05:02:14.713251   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 05:02:21.031471   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\custom-flannel-843300\client.crt: The system cannot find the path specified.
E0219 05:02:32.035943   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-843300\client.crt: The system cannot find the path specified.
E0219 05:02:33.730624   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\auto-843300\client.crt: The system cannot find the path specified.
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-843300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperv: (3m31.3676704s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (211.37s)

TestNetworkPlugins/group/flannel/ControllerPod (5.22s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-tgq7c" [f77dbfa2-9d1c-42a8-af4b-c35272b36368] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.2131945s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.22s)

TestNetworkPlugins/group/flannel/KubeletFlags (6.19s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-843300 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p flannel-843300 "pgrep -a kubelet": (6.1853346s)
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (6.19s)

TestNetworkPlugins/group/flannel/NetCatPod (20.42s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-843300 replace --force -f testdata\netcat-deployment.yaml
E0219 05:03:02.003158   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\custom-flannel-843300\client.crt: The system cannot find the path specified.
net_test.go:148: (dbg) Done: kubectl --context flannel-843300 replace --force -f testdata\netcat-deployment.yaml: (2.408249s)
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-pmzzl" [0ff61b2b-7fd3-44ba-b301-8e89bf61db71] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-pmzzl" [0ff61b2b-7fd3-44ba-b301-8e89bf61db71] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 17.028403s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (20.42s)

TestNetworkPlugins/group/flannel/DNS (0.51s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-843300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.51s)

TestNetworkPlugins/group/flannel/Localhost (0.49s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.49s)

TestNetworkPlugins/group/flannel/HairPin (0.47s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.47s)

TestNetworkPlugins/group/bridge/KubeletFlags (4.01s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-843300 "pgrep -a kubelet"
E0219 05:04:05.277179   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-843300 "pgrep -a kubelet": (4.0109296s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (4.01s)

TestNetworkPlugins/group/bridge/NetCatPod (41.63s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-843300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-5pfps" [d310cf85-fab5-4ae7-9547-b85c92415d12] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0219 05:04:14.928875   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
E0219 05:04:19.566570   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-843300\client.crt: The system cannot find the path specified.
E0219 05:04:23.937898   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\custom-flannel-843300\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-694fc96674-5pfps" [d310cf85-fab5-4ae7-9547-b85c92415d12] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 41.0122087s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (41.63s)

TestStartStop/group/old-k8s-version/serial/FirstStart (217.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-259500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperv --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-259500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperv --kubernetes-version=v1.16.0: (3m37.654398s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (217.65s)

TestNetworkPlugins/group/bridge/DNS (0.39s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-843300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.39s)

TestNetworkPlugins/group/bridge/Localhost (0.39s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.39s)

TestNetworkPlugins/group/bridge/HairPin (0.39s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.39s)

TestNetworkPlugins/group/kubenet/KubeletFlags (4.16s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-843300 "pgrep -a kubelet"
E0219 05:05:41.833162   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-843300 "pgrep -a kubelet": (4.1615826s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (4.16s)

TestNetworkPlugins/group/kubenet/NetCatPod (16.62s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-843300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-7wtch" [1efc5ff0-2094-42d0-957c-ad3d1f080144] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-7wtch" [1efc5ff0-2094-42d0-957c-ad3d1f080144] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 16.0264154s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (16.62s)
TestNetworkPlugins/group/kubenet/DNS (0.44s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-843300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.44s)
TestNetworkPlugins/group/kubenet/Localhost (0.4s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.40s)
TestNetworkPlugins/group/kubenet/HairPin (0.42s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-843300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0219 05:06:00.177276   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-843300\client.crt: The system cannot find the path specified.
E0219 05:06:00.183266   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-843300\client.crt: The system cannot find the path specified.
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.42s)
TestStartStop/group/no-preload/serial/FirstStart (192.1s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-833400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.26.1
E0219 05:07:05.905542   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\auto-843300\client.crt: The system cannot find the path specified.
E0219 05:07:07.781526   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\custom-flannel-843300\client.crt: The system cannot find the path specified.
E0219 05:07:14.717544   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-833400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.26.1: (3m12.1003787s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (192.10s)
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (173.93s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-616000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperv --kubernetes-version=v1.26.1
E0219 05:08:11.445246   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\flannel-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-616000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperv --kubernetes-version=v1.26.1: (2m53.9256331s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (173.93s)
TestStartStop/group/old-k8s-version/serial/DeployApp (10.98s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-259500 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [145a9199-c87a-4427-8127-23f1f2365562] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [145a9199-c87a-4427-8127-23f1f2365562] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.0445181s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-259500 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.98s)
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (5.46s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-259500 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-259500 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.7672661s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-259500 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (5.46s)
TestStartStop/group/old-k8s-version/serial/Stop (33.56s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-259500 --alsologtostderr -v=3
E0219 05:08:37.398492   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-843300\client.crt: The system cannot find the path specified.
E0219 05:08:44.379947   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-843300\client.crt: The system cannot find the path specified.
E0219 05:08:47.109130   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
E0219 05:09:05.281602   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 05:09:06.315126   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-843300\client.crt: The system cannot find the path specified.
E0219 05:09:07.485347   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
E0219 05:09:07.501140   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
E0219 05:09:07.516156   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
E0219 05:09:07.548096   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
E0219 05:09:07.595967   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
E0219 05:09:07.690645   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
E0219 05:09:07.853718   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-259500 --alsologtostderr -v=3: (33.5575578s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (33.56s)
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (4.00s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-259500 -n old-k8s-version-259500
E0219 05:09:08.175041   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
E0219 05:09:08.827289   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-259500 -n old-k8s-version-259500: exit status 7 (1.0600812s)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-259500 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
E0219 05:09:10.112025   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-259500 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.9383244s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (4.00s)
TestStartStop/group/old-k8s-version/serial/SecondStart (474.58s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-259500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperv --kubernetes-version=v1.16.0
E0219 05:09:12.680957   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
E0219 05:09:12.902139   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\flannel-843300\client.crt: The system cannot find the path specified.
E0219 05:09:17.806175   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-259500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperv --kubernetes-version=v1.16.0: (7m49.4819311s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-259500 -n old-k8s-version-259500
E0219 05:17:05.907903   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\auto-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-259500 -n old-k8s-version-259500: (5.0997594s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (474.58s)
TestStartStop/group/newest-cni/serial/FirstStart (217.23s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-934800 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperv --kubernetes-version=v1.26.1
E0219 05:09:28.056357   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
E0219 05:09:48.547129   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-934800 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperv --kubernetes-version=v1.26.1: (3m37.2285737s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (217.23s)
TestStartStop/group/no-preload/serial/DeployApp (10.86s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-833400 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b612993a-c56d-44c8-9445-7c6eebee7ef5] Pending
helpers_test.go:344: "busybox" [b612993a-c56d-44c8-9445-7c6eebee7ef5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b612993a-c56d-44c8-9445-7c6eebee7ef5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.0291198s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-833400 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.86s)
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (4.33s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-833400 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-833400 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.9474634s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-833400 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (4.33s)
TestStartStop/group/no-preload/serial/Stop (26.77s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-833400 --alsologtostderr -v=3
E0219 05:10:29.514869   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
E0219 05:10:34.834820   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\flannel-843300\client.crt: The system cannot find the path specified.
E0219 05:10:41.830311   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 05:10:42.867715   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-843300\client.crt: The system cannot find the path specified.
E0219 05:10:42.882842   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-843300\client.crt: The system cannot find the path specified.
E0219 05:10:42.898506   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-843300\client.crt: The system cannot find the path specified.
E0219 05:10:42.930806   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-843300\client.crt: The system cannot find the path specified.
E0219 05:10:42.978348   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-843300\client.crt: The system cannot find the path specified.
E0219 05:10:43.071033   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-843300\client.crt: The system cannot find the path specified.
E0219 05:10:43.242134   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-843300\client.crt: The system cannot find the path specified.
E0219 05:10:43.569644   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-843300\client.crt: The system cannot find the path specified.
E0219 05:10:44.215124   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-843300\client.crt: The system cannot find the path specified.
E0219 05:10:45.502008   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-843300\client.crt: The system cannot find the path specified.
E0219 05:10:48.072157   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-833400 --alsologtostderr -v=3: (26.771621s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (26.77s)
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (3.18s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-833400 -n no-preload-833400
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-833400 -n no-preload-833400: exit status 7 (1.1972176s)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-833400 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-833400 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (1.9840294s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (3.18s)
TestStartStop/group/no-preload/serial/SecondStart (458.50s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-833400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.26.1
E0219 05:10:53.194396   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-833400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.26.1: (7m33.2701279s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-833400 -n no-preload-833400
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-833400 -n no-preload-833400: (5.2250396s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (458.50s)
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (19.34s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-616000 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dc9e52f5-ea1e-4aeb-8b3a-a9c476b75dad] Pending
helpers_test.go:344: "busybox" [dc9e52f5-ea1e-4aeb-8b3a-a9c476b75dad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0219 05:11:00.182282   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-843300\client.crt: The system cannot find the path specified.
E0219 05:11:03.435413   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-843300\client.crt: The system cannot find the path specified.
helpers_test.go:344: "busybox" [dc9e52f5-ea1e-4aeb-8b3a-a9c476b75dad] Running
E0219 05:11:09.999253   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 18.3002604s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-616000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (19.34s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (4.47s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-616000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-616000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.1123988s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-616000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (4.47s)
TestStartStop/group/default-k8s-diff-port/serial/Stop (25.31s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-616000 --alsologtostderr -v=3
E0219 05:11:23.927036   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-843300\client.crt: The system cannot find the path specified.
E0219 05:11:28.228907   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-843300\client.crt: The system cannot find the path specified.
E0219 05:11:39.543957   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\custom-flannel-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-616000 --alsologtostderr -v=3: (25.3135479s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (25.31s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (2.80s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-616000 -n default-k8s-diff-port-616000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-616000 -n default-k8s-diff-port-616000: exit status 7 (1.0119835s)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-616000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-616000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (1.7914878s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (2.80s)
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (688.17s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-616000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperv --kubernetes-version=v1.26.1
E0219 05:11:51.447296   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
E0219 05:12:04.892220   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-843300\client.crt: The system cannot find the path specified.
E0219 05:12:05.905649   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\auto-843300\client.crt: The system cannot find the path specified.
E0219 05:12:08.458341   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 05:12:14.724895   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
E0219 05:12:50.367301   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\flannel-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-616000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperv --kubernetes-version=v1.26.1: (11m23.0788058s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-616000 -n default-k8s-diff-port-616000
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-616000 -n default-k8s-diff-port-616000: (5.0928564s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (688.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-934800 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-934800 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.2277898s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (26.44s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-934800 --alsologtostderr -v=3
E0219 05:13:18.676685   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\flannel-843300\client.crt: The system cannot find the path specified.
E0219 05:13:26.825787   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-843300\client.crt: The system cannot find the path specified.
E0219 05:13:29.090176   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\auto-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-934800 --alsologtostderr -v=3: (26.4436881s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (26.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (2.44s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-934800 -n newest-cni-934800
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-934800 -n newest-cni-934800: exit status 7 (1.070397s)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-934800 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
E0219 05:13:37.398517   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-934800 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (1.3741435s)
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (2.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (106.43s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-934800 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperv --kubernetes-version=v1.26.1
E0219 05:13:47.111968   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
E0219 05:14:05.286126   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 05:14:07.488320   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
E0219 05:14:35.299160   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
E0219 05:15:10.299698   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-934800 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperv --kubernetes-version=v1.26.1: (1m41.1437898s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-934800 -n newest-cni-934800
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-934800 -n newest-cni-934800: (5.2855708s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (106.43s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (4.19s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-934800 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p newest-cni-934800 "sudo crictl images -o json": (4.190264s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (4.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (29.85s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-934800 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-934800 --alsologtostderr -v=1: (4.233179s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-934800 -n newest-cni-934800
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-934800 -n newest-cni-934800: exit status 2 (5.1660483s)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-934800 -n newest-cni-934800
E0219 05:15:41.822168   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-153200\client.crt: The system cannot find the path specified.
E0219 05:15:42.863741   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubenet-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-934800 -n newest-cni-934800: exit status 2 (5.0493376s)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-934800 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-934800 --alsologtostderr -v=1: (4.495906s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-934800 -n newest-cni-934800
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-934800 -n newest-cni-934800: (5.3900051s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-934800 -n newest-cni-934800
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-934800 -n newest-cni-934800: (5.519682s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (29.85s)
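The `--format={{.APIServer}}` / `--format={{.Kubelet}}` flags used throughout the Pause checks above are Go text/templates rendered against minikube's status data. A minimal self-contained sketch of that rendering step (the `Status` struct here is an illustrative assumption mirroring only the fields the logs reference, not minikube's actual type):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status mirrors the fields referenced by the --format templates in the
// logs above ({{.Host}}, {{.APIServer}}, {{.Kubelet}}); the struct is an
// assumption for illustration, not minikube's real type.
type Status struct {
	Host      string
	APIServer string
	Kubelet   string
}

// render parses the --format string as a Go text/template and executes it
// against the given status, returning the rendered output.
func render(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, st); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	st := Status{Host: "Running", APIServer: "Paused", Kubelet: "Stopped"}
	out, _ := render("{{.APIServer}}", st)
	fmt.Println(out) // Paused
}
```

This is why a paused profile prints `Paused` for `{{.APIServer}}` while the same command with `{{.Kubelet}}` prints `Stopped`: each invocation renders a different field of the same status.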

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (143.31s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-038800 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperv --kubernetes-version=v1.26.1
E0219 05:16:39.543138   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\custom-flannel-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-038800 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperv --kubernetes-version=v1.26.1: (2m23.3101898s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (143.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-78g26" [5cb210dc-86bc-48d4-8469-7ff3e29eb638] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0396089s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.04s)
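The "waiting ... for pods matching" steps above poll for pods whose labels satisfy an equality selector such as `k8s-app=kubernetes-dashboard`. The matching itself can be sketched as below (function and shape are illustrative, not minikube's helper code):

```go
package main

import (
	"fmt"
	"strings"
)

// matchesSelector reports whether a pod's labels satisfy a single
// equality-based selector of the form "key=value", as seen in the
// "pods matching" log lines above.
func matchesSelector(labels map[string]string, selector string) bool {
	parts := strings.SplitN(selector, "=", 2)
	if len(parts) != 2 {
		return false
	}
	v, ok := labels[parts[0]]
	return ok && v == parts[1]
}

func main() {
	podLabels := map[string]string{"k8s-app": "kubernetes-dashboard"}
	fmt.Println(matchesSelector(podLabels, "k8s-app=kubernetes-dashboard")) // true
}
```

The test then counts only matching pods that also report `Running` / `Ready`, which is why the log shows both the pod name and its phase.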

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-78g26" [5cb210dc-86bc-48d4-8469-7ff3e29eb638] Running
E0219 05:17:14.717784   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-068200\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0145914s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-259500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (3.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-259500 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p old-k8s-version-259500 "sudo crictl images -o json": (3.9810101s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (3.98s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (27.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-259500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-259500 --alsologtostderr -v=1: (3.8509591s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-259500 -n old-k8s-version-259500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-259500 -n old-k8s-version-259500: exit status 2 (5.1251477s)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-259500 -n old-k8s-version-259500
E0219 05:17:33.158318   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-259500 -n old-k8s-version-259500: exit status 2 (4.9940973s)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-259500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-259500 --alsologtostderr -v=1: (3.7472295s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-259500 -n old-k8s-version-259500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-259500 -n old-k8s-version-259500: (5.0475894s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-259500 -n old-k8s-version-259500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-259500 -n old-k8s-version-259500: (4.9727666s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (27.74s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-jtxtf" [4d1446ff-90b6-4d4c-8518-a7cd9c40b60f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0329052s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.42s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-jtxtf" [4d1446ff-90b6-4d4c-8518-a7cd9c40b60f] Running
E0219 05:18:37.383231   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0283086s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-833400 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (3.79s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-833400 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p no-preload-833400 "sudo crictl images -o json": (3.7920928s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (3.79s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (27.38s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-833400 --alsologtostderr -v=1
E0219 05:18:47.104206   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-833400 --alsologtostderr -v=1: (3.6994929s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-833400 -n no-preload-833400
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-833400 -n no-preload-833400: exit status 2 (5.0210415s)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-833400 -n no-preload-833400
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-833400 -n no-preload-833400: exit status 2 (4.9274939s)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-833400 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-833400 --alsologtostderr -v=1: (3.7276639s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-833400 -n no-preload-833400
E0219 05:19:05.270422   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-833400 -n no-preload-833400: (4.917369s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-833400 -n no-preload-833400
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-833400 -n no-preload-833400: (5.0836563s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (27.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.83s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-038800 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bc805cbd-cdb5-40ae-abc7-b15064dbf8bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bc805cbd-cdb5-40ae-abc7-b15064dbf8bd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.0428835s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-038800 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.83s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-038800 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0219 05:19:07.481103   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-038800 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.9354897s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-038800 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (25.32s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-038800 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-038800 --alsologtostderr -v=3: (25.3232426s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (25.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (2.99s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-038800 -n embed-certs-038800
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-038800 -n embed-certs-038800: exit status 7 (1.1000541s)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-038800 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-038800 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (1.8843488s)
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (2.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (391.64s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-038800 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperv --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-038800 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperv --kubernetes-version=v1.26.1: (6m26.8209564s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-038800 -n embed-certs-038800
E0219 05:26:07.639060   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-616000\client.crt: The system cannot find the path specified.
E0219 05:26:10.001051   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-038800 -n embed-certs-038800: (4.822784s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (391.64s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (21.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-dk76m" [2db6bbe0-1237-476f-9160-16884cd8f4ee] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0219 05:23:18.433161   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-259500\client.crt: The system cannot find the path specified.
E0219 05:23:18.448942   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-259500\client.crt: The system cannot find the path specified.
E0219 05:23:18.464449   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-259500\client.crt: The system cannot find the path specified.
E0219 05:23:18.495681   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-259500\client.crt: The system cannot find the path specified.
E0219 05:23:18.542692   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-259500\client.crt: The system cannot find the path specified.
E0219 05:23:18.637525   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-259500\client.crt: The system cannot find the path specified.
E0219 05:23:18.810632   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-259500\client.crt: The system cannot find the path specified.
E0219 05:23:19.140937   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-259500\client.crt: The system cannot find the path specified.
E0219 05:23:19.794705   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-259500\client.crt: The system cannot find the path specified.
E0219 05:23:21.078716   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-259500\client.crt: The system cannot find the path specified.
E0219 05:23:23.641522   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-259500\client.crt: The system cannot find the path specified.
E0219 05:23:28.776389   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-259500\client.crt: The system cannot find the path specified.
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-dk76m" [2db6bbe0-1237-476f-9160-16884cd8f4ee] Running
E0219 05:23:37.392738   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.0331353s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (21.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.4s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-dk76m" [2db6bbe0-1237-476f-9160-16884cd8f4ee] Running
E0219 05:23:39.025568   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-259500\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0199032s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-616000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.40s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (3.75s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-diff-port-616000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p default-k8s-diff-port-616000 "sudo crictl images -o json": (3.7546064s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (3.75s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (27.32s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-616000 --alsologtostderr -v=1
E0219 05:23:47.106147   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-616000 --alsologtostderr -v=1: (3.6892085s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-616000 -n default-k8s-diff-port-616000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-616000 -n default-k8s-diff-port-616000: exit status 2 (5.0016122s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-616000 -n default-k8s-diff-port-616000
E0219 05:23:59.513722   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-259500\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-616000 -n default-k8s-diff-port-616000: exit status 2 (4.8484794s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-616000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-616000 --alsologtostderr -v=1: (3.6753784s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-616000 -n default-k8s-diff-port-616000
E0219 05:24:05.282011   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-583800\client.crt: The system cannot find the path specified.
E0219 05:24:07.487266   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\bridge-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-616000 -n default-k8s-diff-port-616000: (5.0913712s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-616000 -n default-k8s-diff-port-616000
E0219 05:24:14.036424   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\flannel-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-616000 -n default-k8s-diff-port-616000: (5.0144593s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (27.32s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.04s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-9cv46" [9d30f008-bb48-4b76-bb09-6d25439e05a4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0364948s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.38s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-9cv46" [9d30f008-bb48-4b76-bb09-6d25439e05a4] Running
E0219 05:26:17.879852   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-616000\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0216041s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-038800 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.38s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (3.7s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-038800 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p embed-certs-038800 "sudo crictl images -o json": (3.6955595s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (3.70s)

TestStartStop/group/embed-certs/serial/Pause (25.97s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-038800 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-038800 --alsologtostderr -v=1: (3.7403785s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-038800 -n embed-certs-038800
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-038800 -n embed-certs-038800: exit status 2 (4.6591807s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-038800 -n embed-certs-038800
E0219 05:26:38.373436   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-616000\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-038800 -n embed-certs-038800: exit status 2 (4.5760999s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-038800 --alsologtostderr -v=1
E0219 05:26:39.543139   10148 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\custom-flannel-843300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-038800 --alsologtostderr -v=1: (3.4322276s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-038800 -n embed-certs-038800
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-038800 -n embed-certs-038800: (4.8704772s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-038800 -n embed-certs-038800
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-038800 -n embed-certs-038800: (4.6867262s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (25.97s)

Test skip (29/292)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.26.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

TestDownloadOnly/v1.26.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.02s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-068200 --alsologtostderr -v=1]
functional_test.go:909: output didn't produce a URL
functional_test.go:903: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-068200 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 8556: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:53: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:165: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:193: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:292: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (17.46s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-843300 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: 
* context was not found for specified context: cilium-843300
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: 
* context was not found for specified context: cilium-843300
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: 
* context was not found for specified context: cilium-843300
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: 
* context was not found for specified context: cilium-843300
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: 
* context was not found for specified context: cilium-843300
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: 
* context was not found for specified context: cilium-843300
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: 
* context was not found for specified context: cilium-843300
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: 
* context was not found for specified context: cilium-843300
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: 
* context was not found for specified context: cilium-843300
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: 
* context was not found for specified context: cilium-843300
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: /etc/hosts:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: /etc/resolv.conf:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: 
* context was not found for specified context: cilium-843300
* cluster has no server defined

>>> host: crictl pods:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: crictl containers:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> k8s: describe netcat deployment:
error: context "cilium-843300" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-843300" does not exist

>>> k8s: netcat logs:
error: context "cilium-843300" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-843300" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-843300" does not exist

>>> k8s: coredns logs:
error: context "cilium-843300" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-843300" does not exist

>>> k8s: api server logs:
error: context "cilium-843300" does not exist

>>> host: /etc/cni:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: ip a s:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: ip r s:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: iptables-save:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: iptables table nat:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> k8s: describe cilium daemon set:
Error in configuration: 
* context was not found for specified context: cilium-843300
* cluster has no server defined

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: 
* context was not found for specified context: cilium-843300
* cluster has no server defined

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-843300" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-843300" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: 
* context was not found for specified context: cilium-843300
* cluster has no server defined

>>> k8s: describe cilium deployment pod(s):
Error in configuration: 
* context was not found for specified context: cilium-843300
* cluster has no server defined

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-843300" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-843300" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-843300" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-843300" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-843300" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: kubelet daemon config:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> k8s: kubelet logs:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: 
* context was not found for specified context: cilium-843300
* cluster has no server defined

>>> host: docker daemon status:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: docker daemon config:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: docker system info:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: cri-docker daemon status:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: cri-docker daemon config:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: cri-dockerd version:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: containerd daemon status:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: containerd daemon config:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: containerd config dump:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: crio daemon status:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: crio daemon config:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: /etc/crio:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

>>> host: crio config:
* Profile "cilium-843300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-843300"

----------------------- debugLogs end: cilium-843300 [took: 16.1073961s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-843300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-843300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cilium-843300: (1.3504514s)
--- SKIP: TestNetworkPlugins/group/cilium (17.46s)

TestStartStop/group/disable-driver-mounts (1.32s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-309000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-309000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-309000: (1.3233986s)
--- SKIP: TestStartStop/group/disable-driver-mounts (1.32s)
