Test Report: Hyperkit_macOS 16845

057581f5ed3e12353dc61aa76dd7fc4d5b684809:2023-07-07:30036

Failed tests (2/317)

Order  Failed test                             Duration (s)
193    TestMinikubeProfile                     70.81
218    TestMultiNode/serial/RestartMultiNode   155.06
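Both failures can be re-run in isolation. A minimal local reproduction sketch, assuming a checkout of minikube at commit 057581f5 with the usual Makefile target and Go integration-test layout (the make target and test flags are assumptions, not confirmed by this report):

    # Build the binary the tests shell out to (the log invokes out/minikube-darwin-amd64).
    make out/minikube-darwin-amd64

    # Re-run only the two failing tests; `go test -run` matches subtests with '/'.
    go test ./test/integration -v -timeout 90m \
      -run 'TestMinikubeProfile|TestMultiNode/serial/RestartMultiNode'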
TestMinikubeProfile (70.81s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-013000 --driver=hyperkit 
E0707 15:59:43.184771   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-013000 --driver=hyperkit : (39.42487921s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-015000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p second-015000 --driver=hyperkit : exit status 90 (18.275909206s)

-- stdout --
	* [second-015000] minikube v1.30.1 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16845
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16845-29196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16845-29196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node second-015000 in cluster second-015000
	* Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
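The root cause above is RUNTIME_ENABLE: systemd inside the guest could not restart cri-docker.socket, so minikube aborted before the cluster came up. A hedged triage sketch, run before the harness deletes the profile (the unit and profile names are taken from the log; nothing else is confirmed by this report):

    # Ask systemd in the guest why the socket unit failed.
    out/minikube-darwin-amd64 ssh -p second-015000 "sudo systemctl status cri-docker.socket"
    out/minikube-darwin-amd64 ssh -p second-015000 "sudo journalctl -xeu cri-docker.socket"

    # Capture the full log bundle, as the error box recommends.
    out/minikube-darwin-amd64 -p second-015000 logs --file=logs.txt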
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-amd64 start -p second-015000 --driver=hyperkit ": exit status 90
panic.go:522: *** TestMinikubeProfile FAILED at 2023-07-07 16:00:16.377381 -0700 PDT m=+986.664689314
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p second-015000 -n second-015000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p second-015000 -n second-015000: exit status 6 (133.590181ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0707 16:00:16.500328   31361 status.go:415] kubeconfig endpoint: extract IP: "second-015000" does not appear in /Users/jenkins/minikube-integration/16845-29196/kubeconfig

** /stderr **
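The exit status 6 here is a kubeconfig problem rather than a VM problem: the second-015000 profile was never written to the shared kubeconfig, and the active context still points at a stale minikube-vm entry. A hedged way to confirm that, using the kubeconfig path from the log:

    # List the contexts the test kubeconfig actually contains.
    kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/16845-29196/kubeconfig

    # Repoint the context at a profile that is still running, as the warning suggests.
    out/minikube-darwin-amd64 -p first-013000 update-context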
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "second-015000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "second-015000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-015000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-015000: (5.337408883s)
panic.go:522: *** TestMinikubeProfile FAILED at 2023-07-07 16:00:21.848825 -0700 PDT m=+992.136013538
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p first-013000 -n first-013000
helpers_test.go:244: <<< TestMinikubeProfile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMinikubeProfile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p first-013000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p first-013000 logs -n 25: (1.996264391s)
helpers_test.go:252: TestMinikubeProfile logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|-----------------------------|----------|---------|---------------------|---------------------|
	| Command |                   Args                   |           Profile           |   User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------|-----------------------------|----------|---------|---------------------|---------------------|
	| delete  | -p functional-571000                     | functional-571000           | jenkins  | v1.30.1 | 07 Jul 23 15:54 PDT | 07 Jul 23 15:54 PDT |
	| start   | -p image-371000                          | image-371000                | jenkins  | v1.30.1 | 07 Jul 23 15:54 PDT | 07 Jul 23 15:55 PDT |
	|         | --driver=hyperkit                        |                             |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-371000                | jenkins  | v1.30.1 | 07 Jul 23 15:55 PDT | 07 Jul 23 15:55 PDT |
	|         | ./testdata/image-build/test-normal       |                             |          |         |                     |                     |
	|         | -p image-371000                          |                             |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-371000                | jenkins  | v1.30.1 | 07 Jul 23 15:55 PDT | 07 Jul 23 15:55 PDT |
	|         | --build-opt=build-arg=ENV_A=test_env_str |                             |          |         |                     |                     |
	|         | --build-opt=no-cache                     |                             |          |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p       |                             |          |         |                     |                     |
	|         | image-371000                             |                             |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-371000                | jenkins  | v1.30.1 | 07 Jul 23 15:55 PDT | 07 Jul 23 15:55 PDT |
	|         | ./testdata/image-build/test-normal       |                             |          |         |                     |                     |
	|         | --build-opt=no-cache -p                  |                             |          |         |                     |                     |
	|         | image-371000                             |                             |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-371000                | jenkins  | v1.30.1 | 07 Jul 23 15:55 PDT | 07 Jul 23 15:55 PDT |
	|         | -f inner/Dockerfile                      |                             |          |         |                     |                     |
	|         | ./testdata/image-build/test-f            |                             |          |         |                     |                     |
	|         | -p image-371000                          |                             |          |         |                     |                     |
	| delete  | -p image-371000                          | image-371000                | jenkins  | v1.30.1 | 07 Jul 23 15:55 PDT | 07 Jul 23 15:55 PDT |
	| start   | -p ingress-addon-legacy-298000           | ingress-addon-legacy-298000 | jenkins  | v1.30.1 | 07 Jul 23 15:55 PDT | 07 Jul 23 15:56 PDT |
	|         | --kubernetes-version=v1.18.20            |                             |          |         |                     |                     |
	|         | --memory=4096 --wait=true                |                             |          |         |                     |                     |
	|         | --alsologtostderr -v=5                   |                             |          |         |                     |                     |
	|         | --driver=hyperkit                        |                             |          |         |                     |                     |
	| addons  | ingress-addon-legacy-298000              | ingress-addon-legacy-298000 | jenkins  | v1.30.1 | 07 Jul 23 15:56 PDT | 07 Jul 23 15:57 PDT |
	|         | addons enable ingress                    |                             |          |         |                     |                     |
	|         | --alsologtostderr -v=5                   |                             |          |         |                     |                     |
	| addons  | ingress-addon-legacy-298000              | ingress-addon-legacy-298000 | jenkins  | v1.30.1 | 07 Jul 23 15:57 PDT | 07 Jul 23 15:57 PDT |
	|         | addons enable ingress-dns                |                             |          |         |                     |                     |
	|         | --alsologtostderr -v=5                   |                             |          |         |                     |                     |
	| ssh     | ingress-addon-legacy-298000              | ingress-addon-legacy-298000 | jenkins  | v1.30.1 | 07 Jul 23 15:57 PDT | 07 Jul 23 15:57 PDT |
	|         | ssh curl -s http://127.0.0.1/            |                             |          |         |                     |                     |
	|         | -H 'Host: nginx.example.com'             |                             |          |         |                     |                     |
	| ip      | ingress-addon-legacy-298000 ip           | ingress-addon-legacy-298000 | jenkins  | v1.30.1 | 07 Jul 23 15:57 PDT | 07 Jul 23 15:57 PDT |
	| addons  | ingress-addon-legacy-298000              | ingress-addon-legacy-298000 | jenkins  | v1.30.1 | 07 Jul 23 15:57 PDT | 07 Jul 23 15:57 PDT |
	|         | addons disable ingress-dns               |                             |          |         |                     |                     |
	|         | --alsologtostderr -v=1                   |                             |          |         |                     |                     |
	| addons  | ingress-addon-legacy-298000              | ingress-addon-legacy-298000 | jenkins  | v1.30.1 | 07 Jul 23 15:57 PDT | 07 Jul 23 15:57 PDT |
	|         | addons disable ingress                   |                             |          |         |                     |                     |
	|         | --alsologtostderr -v=1                   |                             |          |         |                     |                     |
	| delete  | -p ingress-addon-legacy-298000           | ingress-addon-legacy-298000 | jenkins  | v1.30.1 | 07 Jul 23 15:57 PDT | 07 Jul 23 15:57 PDT |
	| start   | -p json-output-146000                    | json-output-146000          | testUser | v1.30.1 | 07 Jul 23 15:57 PDT | 07 Jul 23 15:59 PDT |
	|         | --output=json --user=testUser            |                             |          |         |                     |                     |
	|         | --memory=2200 --wait=true                |                             |          |         |                     |                     |
	|         | --driver=hyperkit                        |                             |          |         |                     |                     |
	| pause   | -p json-output-146000                    | json-output-146000          | testUser | v1.30.1 | 07 Jul 23 15:59 PDT | 07 Jul 23 15:59 PDT |
	|         | --output=json --user=testUser            |                             |          |         |                     |                     |
	| unpause | -p json-output-146000                    | json-output-146000          | testUser | v1.30.1 | 07 Jul 23 15:59 PDT | 07 Jul 23 15:59 PDT |
	|         | --output=json --user=testUser            |                             |          |         |                     |                     |
	| stop    | -p json-output-146000                    | json-output-146000          | testUser | v1.30.1 | 07 Jul 23 15:59 PDT | 07 Jul 23 15:59 PDT |
	|         | --output=json --user=testUser            |                             |          |         |                     |                     |
	| delete  | -p json-output-146000                    | json-output-146000          | jenkins  | v1.30.1 | 07 Jul 23 15:59 PDT | 07 Jul 23 15:59 PDT |
	| start   | -p json-output-error-432000              | json-output-error-432000    | jenkins  | v1.30.1 | 07 Jul 23 15:59 PDT |                     |
	|         | --memory=2200 --output=json              |                             |          |         |                     |                     |
	|         | --wait=true --driver=fail                |                             |          |         |                     |                     |
	| delete  | -p json-output-error-432000              | json-output-error-432000    | jenkins  | v1.30.1 | 07 Jul 23 15:59 PDT | 07 Jul 23 15:59 PDT |
	| start   | -p first-013000                          | first-013000                | jenkins  | v1.30.1 | 07 Jul 23 15:59 PDT | 07 Jul 23 15:59 PDT |
	|         | --driver=hyperkit                        |                             |          |         |                     |                     |
	| start   | -p second-015000                         | second-015000               | jenkins  | v1.30.1 | 07 Jul 23 15:59 PDT |                     |
	|         | --driver=hyperkit                        |                             |          |         |                     |                     |
	| delete  | -p second-015000                         | second-015000               | jenkins  | v1.30.1 | 07 Jul 23 16:00 PDT | 07 Jul 23 16:00 PDT |
	|---------|------------------------------------------|-----------------------------|----------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/07 15:59:58
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0707 15:59:58.141099   31333 out.go:296] Setting OutFile to fd 1 ...
	I0707 15:59:58.141261   31333 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 15:59:58.141265   31333 out.go:309] Setting ErrFile to fd 2...
	I0707 15:59:58.141267   31333 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 15:59:58.141381   31333 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16845-29196/.minikube/bin
	I0707 15:59:58.142864   31333 out.go:303] Setting JSON to false
	I0707 15:59:58.162066   31333 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10763,"bootTime":1688760035,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0707 15:59:58.162138   31333 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0707 15:59:58.182545   31333 out.go:177] * [second-015000] minikube v1.30.1 on Darwin 13.4.1
	I0707 15:59:58.224799   31333 notify.go:220] Checking for updates...
	I0707 15:59:58.250758   31333 out.go:177]   - MINIKUBE_LOCATION=16845
	I0707 15:59:58.296692   31333 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16845-29196/kubeconfig
	I0707 15:59:58.337574   31333 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0707 15:59:58.398922   31333 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0707 15:59:58.419762   31333 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16845-29196/.minikube
	I0707 15:59:58.462666   31333 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0707 15:59:58.484139   31333 config.go:182] Loaded profile config "first-013000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0707 15:59:58.484247   31333 driver.go:373] Setting default libvirt URI to qemu:///system
	I0707 15:59:58.512983   31333 out.go:177] * Using the hyperkit driver based on user configuration
	I0707 15:59:58.554654   31333 start.go:297] selected driver: hyperkit
	I0707 15:59:58.554668   31333 start.go:944] validating driver "hyperkit" against <nil>
	I0707 15:59:58.554685   31333 start.go:955] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0707 15:59:58.554896   31333 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0707 15:59:58.555107   31333 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/16845-29196/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0707 15:59:58.563394   31333 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.30.1
	I0707 15:59:58.566774   31333 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 15:59:58.566787   31333 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0707 15:59:58.566819   31333 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0707 15:59:58.569081   31333 start_flags.go:382] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0707 15:59:58.569238   31333 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0707 15:59:58.569255   31333 cni.go:84] Creating CNI manager for ""
	I0707 15:59:58.569268   31333 cni.go:152] "hyperkit" driver + "docker" runtime found, recommending bridge
	I0707 15:59:58.569274   31333 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0707 15:59:58.569282   31333 start_flags.go:319] config:
	{Name:second-015000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:second-015000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0707 15:59:58.569419   31333 iso.go:125] acquiring lock: {Name:mkc26c030f62bdf6e3ab619c68665518d3e66b24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0707 15:59:58.611778   31333 out.go:177] * Starting control plane node second-015000 in cluster second-015000
	I0707 15:59:58.632786   31333 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0707 15:59:58.632858   31333 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16845-29196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0707 15:59:58.632886   31333 cache.go:57] Caching tarball of preloaded images
	I0707 15:59:58.633070   31333 preload.go:174] Found /Users/jenkins/minikube-integration/16845-29196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0707 15:59:58.633086   31333 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0707 15:59:58.633262   31333 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/second-015000/config.json ...
	I0707 15:59:58.633307   31333 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/second-015000/config.json: {Name:mk0e380f76caf6d048e3437fec4eb46f6f3152d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0707 15:59:58.633928   31333 start.go:365] acquiring machines lock for second-015000: {Name:mk81f6152b3f423bf222fad0025fe3c8ddb3ea12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0707 15:59:58.634018   31333 start.go:369] acquired machines lock for "second-015000" in 74.681µs
	I0707 15:59:58.634057   31333 start.go:93] Provisioning new machine with config: &{Name:second-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.27.3 ClusterName:second-015000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0707 15:59:58.634137   31333 start.go:125] createHost starting for "" (driver="hyperkit")
	I0707 15:59:58.682634   31333 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0707 15:59:58.683117   31333 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 15:59:58.683797   31333 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 15:59:58.692342   31333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64713
	I0707 15:59:58.692688   31333 main.go:141] libmachine: () Calling .GetVersion
	I0707 15:59:58.693109   31333 main.go:141] libmachine: Using API Version  1
	I0707 15:59:58.693117   31333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 15:59:58.693354   31333 main.go:141] libmachine: () Calling .GetMachineName
	I0707 15:59:58.693451   31333 main.go:141] libmachine: (second-015000) Calling .GetMachineName
	I0707 15:59:58.693541   31333 main.go:141] libmachine: (second-015000) Calling .DriverName
	I0707 15:59:58.693629   31333 start.go:159] libmachine.API.Create for "second-015000" (driver="hyperkit")
	I0707 15:59:58.693644   31333 client.go:168] LocalClient.Create starting
	I0707 15:59:58.693680   31333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem
	I0707 15:59:58.693719   31333 main.go:141] libmachine: Decoding PEM data...
	I0707 15:59:58.693732   31333 main.go:141] libmachine: Parsing certificate...
	I0707 15:59:58.693795   31333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/cert.pem
	I0707 15:59:58.693816   31333 main.go:141] libmachine: Decoding PEM data...
	I0707 15:59:58.693824   31333 main.go:141] libmachine: Parsing certificate...
	I0707 15:59:58.693835   31333 main.go:141] libmachine: Running pre-create checks...
	I0707 15:59:58.693841   31333 main.go:141] libmachine: (second-015000) Calling .PreCreateCheck
	I0707 15:59:58.693913   31333 main.go:141] libmachine: (second-015000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 15:59:58.694080   31333 main.go:141] libmachine: (second-015000) Calling .GetConfigRaw
	I0707 15:59:58.694489   31333 main.go:141] libmachine: Creating machine...
	I0707 15:59:58.694494   31333 main.go:141] libmachine: (second-015000) Calling .Create
	I0707 15:59:58.694568   31333 main.go:141] libmachine: (second-015000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 15:59:58.694691   31333 main.go:141] libmachine: (second-015000) DBG | I0707 15:59:58.694566   31341 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/16845-29196/.minikube
	I0707 15:59:58.694741   31333 main.go:141] libmachine: (second-015000) Downloading /Users/jenkins/minikube-integration/16845-29196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16845-29196/.minikube/cache/iso/amd64/minikube-v1.30.1-1688144767-16765-amd64.iso...
	I0707 15:59:58.924117   31333 main.go:141] libmachine: (second-015000) DBG | I0707 15:59:58.924038   31341 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/id_rsa...
	I0707 15:59:59.098720   31333 main.go:141] libmachine: (second-015000) DBG | I0707 15:59:59.098631   31341 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/second-015000.rawdisk...
	I0707 15:59:59.098733   31333 main.go:141] libmachine: (second-015000) DBG | Writing magic tar header
	I0707 15:59:59.098742   31333 main.go:141] libmachine: (second-015000) DBG | Writing SSH key tar header
	I0707 15:59:59.099212   31333 main.go:141] libmachine: (second-015000) DBG | I0707 15:59:59.099171   31341 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000 ...
	I0707 15:59:59.443312   31333 main.go:141] libmachine: (second-015000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 15:59:59.443328   31333 main.go:141] libmachine: (second-015000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/hyperkit.pid
	I0707 15:59:59.443366   31333 main.go:141] libmachine: (second-015000) DBG | Using UUID ff2155b8-1d19-11ee-83e4-149d997f80ea
	I0707 15:59:59.466757   31333 main.go:141] libmachine: (second-015000) DBG | Generated MAC 5a:57:82:35:3f:0
	I0707 15:59:59.466779   31333 main.go:141] libmachine: (second-015000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=second-015000
	I0707 15:59:59.466808   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 15:59:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ff2155b8-1d19-11ee-83e4-149d997f80ea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000110420)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/bzimage", Initrd:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0707 15:59:59.466831   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 15:59:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ff2155b8-1d19-11ee-83e4-149d997f80ea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000110420)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/bzimage", Initrd:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0707 15:59:59.466891   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 15:59:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/hyperkit.pid", "-c", "2", "-m", "6000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ff2155b8-1d19-11ee-83e4-149d997f80ea", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/second-015000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/tty,log=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/bzimage,/Users/jenkins/minikube-integration/16845-29196/.minikube/
machines/second-015000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=second-015000"}
	I0707 15:59:59.466926   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 15:59:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/hyperkit.pid -c 2 -m 6000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ff2155b8-1d19-11ee-83e4-149d997f80ea -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/second-015000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/tty,log=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/console-ring -f kexec,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/bzimage,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/initrd,earlyprintk=serial loglevel=3 co
nsole=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=second-015000"
	I0707 15:59:59.466935   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 15:59:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0707 15:59:59.469433   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 15:59:59 DEBUG: hyperkit: Pid is 31342
	I0707 15:59:59.469854   31333 main.go:141] libmachine: (second-015000) DBG | Attempt 0
	I0707 15:59:59.469870   31333 main.go:141] libmachine: (second-015000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 15:59:59.469903   31333 main.go:141] libmachine: (second-015000) DBG | hyperkit pid from json: 31342
	I0707 15:59:59.470731   31333 main.go:141] libmachine: (second-015000) DBG | Searching for 5a:57:82:35:3f:0 in /var/db/dhcpd_leases ...
	I0707 15:59:59.470816   31333 main.go:141] libmachine: (second-015000) DBG | Found 50 entries in /var/db/dhcpd_leases!
	I0707 15:59:59.470838   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.51 HWAddress:2a:a0:3b:ac:43:5 ID:1,2a:a0:3b:ac:43:5 Lease:0x64a9ea51}
	I0707 15:59:59.470845   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.50 HWAddress:e:ef:40:f6:89:98 ID:1,e:ef:40:f6:89:98 Lease:0x64a9e9f8}
	I0707 15:59:59.470854   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.49 HWAddress:2a:32:21:a:c9:ea ID:1,2a:32:21:a:c9:ea Lease:0x64a9e979}
	I0707 15:59:59.470859   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.48 HWAddress:22:ae:6d:e6:53:22 ID:1,22:ae:6d:e6:53:22 Lease:0x64a9e93c}
	I0707 15:59:59.470878   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.47 HWAddress:d2:a:15:e7:c7:12 ID:1,d2:a:15:e7:c7:12 Lease:0x64a9e851}
	I0707 15:59:59.470894   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.46 HWAddress:8a:4d:70:3e:72:d7 ID:1,8a:4d:70:3e:72:d7 Lease:0x64a9e823}
	I0707 15:59:59.470902   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.45 HWAddress:1a:8e:ab:d5:52:f4 ID:1,1a:8e:ab:d5:52:f4 Lease:0x64a9e6d4}
	I0707 15:59:59.470908   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.44 HWAddress:72:19:49:64:8b:18 ID:1,72:19:49:64:8b:18 Lease:0x64a9e658}
	I0707 15:59:59.470913   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.43 HWAddress:8a:f2:e1:4a:aa:55 ID:1,8a:f2:e1:4a:aa:55 Lease:0x64a9e511}
	I0707 15:59:59.470919   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.42 HWAddress:36:71:99:1a:a1:ca ID:1,36:71:99:1a:a1:ca Lease:0x64a9e3df}
	I0707 15:59:59.470923   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.41 HWAddress:26:7d:94:3e:ce:86 ID:1,26:7d:94:3e:ce:86 Lease:0x64a9e25c}
	I0707 15:59:59.470933   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.40 HWAddress:be:9b:40:44:bd:5e ID:1,be:9b:40:44:bd:5e Lease:0x64a9e299}
	I0707 15:59:59.470938   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.39 HWAddress:92:23:35:2f:b4:e0 ID:1,92:23:35:2f:b4:e0 Lease:0x64a9e1d4}
	I0707 15:59:59.470943   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.38 HWAddress:ca:78:94:69:83:9d ID:1,ca:78:94:69:83:9d Lease:0x64a9e1a1}
	I0707 15:59:59.470948   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.37 HWAddress:2e:6b:da:4d:43:63 ID:1,2e:6b:da:4d:43:63 Lease:0x64a9e17a}
	I0707 15:59:59.470954   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.36 HWAddress:1a:1e:78:c2:3:5b ID:1,1a:1e:78:c2:3:5b Lease:0x64a9e14d}
	I0707 15:59:59.470959   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.35 HWAddress:56:21:77:6e:3e:d7 ID:1,56:21:77:6e:3e:d7 Lease:0x64a9e11c}
	I0707 15:59:59.470969   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.34 HWAddress:b6:79:6d:78:e5:91 ID:1,b6:79:6d:78:e5:91 Lease:0x64a9e0df}
	I0707 15:59:59.470976   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.33 HWAddress:e:d:6:69:70:57 ID:1,e:d:6:69:70:57 Lease:0x64a9e0b7}
	I0707 15:59:59.470982   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.32 HWAddress:12:c:fc:38:1b:7d ID:1,12:c:fc:38:1b:7d Lease:0x64a9e07e}
	I0707 15:59:59.470988   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.31 HWAddress:2a:be:6e:1c:25:23 ID:1,2a:be:6e:1c:25:23 Lease:0x64a9e029}
	I0707 15:59:59.470994   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.30 HWAddress:f6:26:cc:59:fb:86 ID:1,f6:26:cc:59:fb:86 Lease:0x64a9e002}
	I0707 15:59:59.471001   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.29 HWAddress:32:f4:fc:97:bb:c5 ID:1,32:f4:fc:97:bb:c5 Lease:0x64a9dfd9}
	I0707 15:59:59.471007   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.28 HWAddress:f2:ac:14:df:2:e1 ID:1,f2:ac:14:df:2:e1 Lease:0x64a9dfbd}
	I0707 15:59:59.471013   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:86:40:da:85:93:5 ID:1,86:40:da:85:93:5 Lease:0x64a9dfaf}
	I0707 15:59:59.471018   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:96:a2:d0:1b:cc:48 ID:1,96:a2:d0:1b:cc:48 Lease:0x64a88e33}
	I0707 15:59:59.471023   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:7e:18:98:3:79:ac ID:1,7e:18:98:3:79:ac Lease:0x64a88e11}
	I0707 15:59:59.471028   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:a6:24:7b:fc:77:e8 ID:1,a6:24:7b:fc:77:e8 Lease:0x64a88dd3}
	I0707 15:59:59.471047   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:da:c2:99:f7:b9:2b ID:1,da:c2:99:f7:b9:2b Lease:0x64a9def7}
	I0707 15:59:59.471059   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:12:2c:d8:d3:98:3d ID:1,12:2c:d8:d3:98:3d Lease:0x64a9de95}
	I0707 15:59:59.471065   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:96:63:2d:24:c0:8e ID:1,96:63:2d:24:c0:8e Lease:0x64a9deac}
	I0707 15:59:59.471070   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ce:d:af:fb:a4:29 ID:1,ce:d:af:fb:a4:29 Lease:0x64a88d00}
	I0707 15:59:59.471080   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:c6:d1:4d:d1:c6:c6 ID:1,c6:d1:4d:d1:c6:c6 Lease:0x64a9ddd3}
	I0707 15:59:59.471085   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:4a:0:ad:11:5a:b8 ID:1,4a:0:ad:11:5a:b8 Lease:0x64a9dd66}
	I0707 15:59:59.471091   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:76:24:4a:dc:55:63 ID:1,76:24:4a:dc:55:63 Lease:0x64a9dcf9}
	I0707 15:59:59.471096   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:7a:76:a9:a4:41:d6 ID:1,7a:76:a9:a4:41:d6 Lease:0x64a9dcac}
	I0707 15:59:59.471104   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:9a:d4:1c:20:49:9a ID:1,9a:d4:1c:20:49:9a Lease:0x64a88ab6}
	I0707 15:59:59.471112   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:5e:28:a1:fd:5:74 ID:1,5e:28:a1:fd:5:74 Lease:0x64a88a2a}
	I0707 15:59:59.471121   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:5e:f2:9f:60:b5:67 ID:1,5e:f2:9f:60:b5:67 Lease:0x64a9dbf9}
	I0707 15:59:59.471128   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:ba:29:cf:65:9a:f6 ID:1,ba:29:cf:65:9a:f6 Lease:0x64a9dbc4}
	I0707 15:59:59.471134   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:c2:46:a3:47:d0:6f ID:1,c2:46:a3:47:d0:6f Lease:0x64a888da}
	I0707 15:59:59.471139   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:e6:f8:da:7f:b0:2 ID:1,e6:f8:da:7f:b0:2 Lease:0x64a888ac}
	I0707 15:59:59.471144   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:1a:e8:95:cf:b7:2c ID:1,1a:e8:95:cf:b7:2c Lease:0x64a9d9e1}
	I0707 15:59:59.471153   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:82:60:aa:b:ac:82 ID:1,82:60:aa:b:ac:82 Lease:0x64a9d9bc}
	I0707 15:59:59.471160   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:b2:5e:c6:47:87:ac ID:1,b2:5e:c6:47:87:ac Lease:0x64a9d97e}
	I0707 15:59:59.471167   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:9a:cd:ff:b7:63:f ID:1,9a:cd:ff:b7:63:f Lease:0x64a9d8f8}
	I0707 15:59:59.471173   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:d2:28:10:10:80:9f ID:1,d2:28:10:10:80:9f Lease:0x64a9d8c0}
	I0707 15:59:59.471178   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:e:3f:ec:83:e4:c9 ID:1,e:3f:ec:83:e4:c9 Lease:0x64a9d7c9}
	I0707 15:59:59.471192   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:92:45:cd:92:c5:57 ID:1,92:45:cd:92:c5:57 Lease:0x64a8863e}
	I0707 15:59:59.471206   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ee:ec:a1:83:cb:bd ID:1,ee:ec:a1:83:cb:bd Lease:0x64a9d66a}
	I0707 15:59:59.476154   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 15:59:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0707 15:59:59.486369   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 15:59:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0707 15:59:59.487142   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 15:59:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0707 15:59:59.487159   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 15:59:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0707 15:59:59.487166   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 15:59:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0707 15:59:59.487171   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 15:59:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0707 16:00:00.053386   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 16:00:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0707 16:00:00.053400   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 16:00:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0707 16:00:00.158517   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 16:00:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0707 16:00:00.158530   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 16:00:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0707 16:00:00.158538   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 16:00:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0707 16:00:00.158547   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 16:00:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0707 16:00:00.159407   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 16:00:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0707 16:00:00.159415   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 16:00:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0707 16:00:01.472935   31333 main.go:141] libmachine: (second-015000) DBG | Attempt 1
	I0707 16:00:01.472957   31333 main.go:141] libmachine: (second-015000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:00:01.473043   31333 main.go:141] libmachine: (second-015000) DBG | hyperkit pid from json: 31342
	I0707 16:00:01.473812   31333 main.go:141] libmachine: (second-015000) DBG | Searching for 5a:57:82:35:3f:0 in /var/db/dhcpd_leases ...
	I0707 16:00:01.473897   31333 main.go:141] libmachine: (second-015000) DBG | Found 50 entries in /var/db/dhcpd_leases!
	I0707 16:00:01.473903   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.51 HWAddress:2a:a0:3b:ac:43:5 ID:1,2a:a0:3b:ac:43:5 Lease:0x64a9ea51}
	I0707 16:00:01.473912   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.50 HWAddress:e:ef:40:f6:89:98 ID:1,e:ef:40:f6:89:98 Lease:0x64a9e9f8}
	I0707 16:00:01.473917   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.49 HWAddress:2a:32:21:a:c9:ea ID:1,2a:32:21:a:c9:ea Lease:0x64a9e979}
	I0707 16:00:01.473926   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.48 HWAddress:22:ae:6d:e6:53:22 ID:1,22:ae:6d:e6:53:22 Lease:0x64a9e93c}
	I0707 16:00:01.473934   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.47 HWAddress:d2:a:15:e7:c7:12 ID:1,d2:a:15:e7:c7:12 Lease:0x64a9e851}
	I0707 16:00:01.473941   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.46 HWAddress:8a:4d:70:3e:72:d7 ID:1,8a:4d:70:3e:72:d7 Lease:0x64a9e823}
	I0707 16:00:01.473946   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.45 HWAddress:1a:8e:ab:d5:52:f4 ID:1,1a:8e:ab:d5:52:f4 Lease:0x64a9e6d4}
	I0707 16:00:01.473952   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.44 HWAddress:72:19:49:64:8b:18 ID:1,72:19:49:64:8b:18 Lease:0x64a9e658}
	I0707 16:00:01.473959   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.43 HWAddress:8a:f2:e1:4a:aa:55 ID:1,8a:f2:e1:4a:aa:55 Lease:0x64a9e511}
	I0707 16:00:01.473965   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.42 HWAddress:36:71:99:1a:a1:ca ID:1,36:71:99:1a:a1:ca Lease:0x64a9e3df}
	I0707 16:00:01.473970   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.41 HWAddress:26:7d:94:3e:ce:86 ID:1,26:7d:94:3e:ce:86 Lease:0x64a9e25c}
	I0707 16:00:01.473981   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.40 HWAddress:be:9b:40:44:bd:5e ID:1,be:9b:40:44:bd:5e Lease:0x64a9e299}
	I0707 16:00:01.474002   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.39 HWAddress:92:23:35:2f:b4:e0 ID:1,92:23:35:2f:b4:e0 Lease:0x64a9e1d4}
	I0707 16:00:01.474032   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.38 HWAddress:ca:78:94:69:83:9d ID:1,ca:78:94:69:83:9d Lease:0x64a9e1a1}
	I0707 16:00:01.474040   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.37 HWAddress:2e:6b:da:4d:43:63 ID:1,2e:6b:da:4d:43:63 Lease:0x64a9e17a}
	I0707 16:00:01.474046   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.36 HWAddress:1a:1e:78:c2:3:5b ID:1,1a:1e:78:c2:3:5b Lease:0x64a9e14d}
	I0707 16:00:01.474051   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.35 HWAddress:56:21:77:6e:3e:d7 ID:1,56:21:77:6e:3e:d7 Lease:0x64a9e11c}
	I0707 16:00:01.474060   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.34 HWAddress:b6:79:6d:78:e5:91 ID:1,b6:79:6d:78:e5:91 Lease:0x64a9e0df}
	I0707 16:00:01.474065   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.33 HWAddress:e:d:6:69:70:57 ID:1,e:d:6:69:70:57 Lease:0x64a9e0b7}
	I0707 16:00:01.474076   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.32 HWAddress:12:c:fc:38:1b:7d ID:1,12:c:fc:38:1b:7d Lease:0x64a9e07e}
	I0707 16:00:01.474084   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.31 HWAddress:2a:be:6e:1c:25:23 ID:1,2a:be:6e:1c:25:23 Lease:0x64a9e029}
	I0707 16:00:01.474090   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.30 HWAddress:f6:26:cc:59:fb:86 ID:1,f6:26:cc:59:fb:86 Lease:0x64a9e002}
	I0707 16:00:01.474098   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.29 HWAddress:32:f4:fc:97:bb:c5 ID:1,32:f4:fc:97:bb:c5 Lease:0x64a9dfd9}
	I0707 16:00:01.474104   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.28 HWAddress:f2:ac:14:df:2:e1 ID:1,f2:ac:14:df:2:e1 Lease:0x64a9dfbd}
	I0707 16:00:01.474109   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:86:40:da:85:93:5 ID:1,86:40:da:85:93:5 Lease:0x64a9dfaf}
	I0707 16:00:01.474116   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:96:a2:d0:1b:cc:48 ID:1,96:a2:d0:1b:cc:48 Lease:0x64a88e33}
	I0707 16:00:01.474122   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:7e:18:98:3:79:ac ID:1,7e:18:98:3:79:ac Lease:0x64a88e11}
	I0707 16:00:01.474141   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:a6:24:7b:fc:77:e8 ID:1,a6:24:7b:fc:77:e8 Lease:0x64a88dd3}
	I0707 16:00:01.474146   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:da:c2:99:f7:b9:2b ID:1,da:c2:99:f7:b9:2b Lease:0x64a9def7}
	I0707 16:00:01.474175   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:12:2c:d8:d3:98:3d ID:1,12:2c:d8:d3:98:3d Lease:0x64a9de95}
	I0707 16:00:01.474196   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:96:63:2d:24:c0:8e ID:1,96:63:2d:24:c0:8e Lease:0x64a9deac}
	I0707 16:00:01.474201   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ce:d:af:fb:a4:29 ID:1,ce:d:af:fb:a4:29 Lease:0x64a88d00}
	I0707 16:00:01.474239   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:c6:d1:4d:d1:c6:c6 ID:1,c6:d1:4d:d1:c6:c6 Lease:0x64a9ddd3}
	I0707 16:00:01.474244   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:4a:0:ad:11:5a:b8 ID:1,4a:0:ad:11:5a:b8 Lease:0x64a9dd66}
	I0707 16:00:01.474264   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:76:24:4a:dc:55:63 ID:1,76:24:4a:dc:55:63 Lease:0x64a9dcf9}
	I0707 16:00:01.474271   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:7a:76:a9:a4:41:d6 ID:1,7a:76:a9:a4:41:d6 Lease:0x64a9dcac}
	I0707 16:00:01.474276   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:9a:d4:1c:20:49:9a ID:1,9a:d4:1c:20:49:9a Lease:0x64a88ab6}
	I0707 16:00:01.474310   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:5e:28:a1:fd:5:74 ID:1,5e:28:a1:fd:5:74 Lease:0x64a88a2a}
	I0707 16:00:01.474315   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:5e:f2:9f:60:b5:67 ID:1,5e:f2:9f:60:b5:67 Lease:0x64a9dbf9}
	I0707 16:00:01.474339   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:ba:29:cf:65:9a:f6 ID:1,ba:29:cf:65:9a:f6 Lease:0x64a9dbc4}
	I0707 16:00:01.474346   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:c2:46:a3:47:d0:6f ID:1,c2:46:a3:47:d0:6f Lease:0x64a888da}
	I0707 16:00:01.474353   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:e6:f8:da:7f:b0:2 ID:1,e6:f8:da:7f:b0:2 Lease:0x64a888ac}
	I0707 16:00:01.474361   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:1a:e8:95:cf:b7:2c ID:1,1a:e8:95:cf:b7:2c Lease:0x64a9d9e1}
	I0707 16:00:01.474395   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:82:60:aa:b:ac:82 ID:1,82:60:aa:b:ac:82 Lease:0x64a9d9bc}
	I0707 16:00:01.474415   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:b2:5e:c6:47:87:ac ID:1,b2:5e:c6:47:87:ac Lease:0x64a9d97e}
	I0707 16:00:01.474441   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:9a:cd:ff:b7:63:f ID:1,9a:cd:ff:b7:63:f Lease:0x64a9d8f8}
	I0707 16:00:01.474447   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:d2:28:10:10:80:9f ID:1,d2:28:10:10:80:9f Lease:0x64a9d8c0}
	I0707 16:00:01.474471   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:e:3f:ec:83:e4:c9 ID:1,e:3f:ec:83:e4:c9 Lease:0x64a9d7c9}
	I0707 16:00:01.474479   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:92:45:cd:92:c5:57 ID:1,92:45:cd:92:c5:57 Lease:0x64a8863e}
	I0707 16:00:01.474487   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ee:ec:a1:83:cb:bd ID:1,ee:ec:a1:83:cb:bd Lease:0x64a9d66a}
	I0707 16:00:03.475265   31333 main.go:141] libmachine: (second-015000) DBG | Attempt 2
	I0707 16:00:03.475279   31333 main.go:141] libmachine: (second-015000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:00:03.475291   31333 main.go:141] libmachine: (second-015000) DBG | hyperkit pid from json: 31342
	I0707 16:00:03.476072   31333 main.go:141] libmachine: (second-015000) DBG | Searching for 5a:57:82:35:3f:0 in /var/db/dhcpd_leases ...
	I0707 16:00:03.476147   31333 main.go:141] libmachine: (second-015000) DBG | Found 50 entries in /var/db/dhcpd_leases!
	I0707 16:00:03.476156   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.51 HWAddress:2a:a0:3b:ac:43:5 ID:1,2a:a0:3b:ac:43:5 Lease:0x64a9ea51}
	I0707 16:00:03.476164   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.50 HWAddress:e:ef:40:f6:89:98 ID:1,e:ef:40:f6:89:98 Lease:0x64a9e9f8}
	I0707 16:00:03.476174   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.49 HWAddress:2a:32:21:a:c9:ea ID:1,2a:32:21:a:c9:ea Lease:0x64a9e979}
	I0707 16:00:03.476181   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.48 HWAddress:22:ae:6d:e6:53:22 ID:1,22:ae:6d:e6:53:22 Lease:0x64a9e93c}
	I0707 16:00:03.476191   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.47 HWAddress:d2:a:15:e7:c7:12 ID:1,d2:a:15:e7:c7:12 Lease:0x64a9e851}
	I0707 16:00:03.476196   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.46 HWAddress:8a:4d:70:3e:72:d7 ID:1,8a:4d:70:3e:72:d7 Lease:0x64a9e823}
	I0707 16:00:03.476211   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.45 HWAddress:1a:8e:ab:d5:52:f4 ID:1,1a:8e:ab:d5:52:f4 Lease:0x64a9e6d4}
	I0707 16:00:03.476227   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.44 HWAddress:72:19:49:64:8b:18 ID:1,72:19:49:64:8b:18 Lease:0x64a9e658}
	I0707 16:00:03.476235   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.43 HWAddress:8a:f2:e1:4a:aa:55 ID:1,8a:f2:e1:4a:aa:55 Lease:0x64a9e511}
	I0707 16:00:03.476243   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.42 HWAddress:36:71:99:1a:a1:ca ID:1,36:71:99:1a:a1:ca Lease:0x64a9e3df}
	I0707 16:00:03.476252   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.41 HWAddress:26:7d:94:3e:ce:86 ID:1,26:7d:94:3e:ce:86 Lease:0x64a9e25c}
	I0707 16:00:03.476261   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.40 HWAddress:be:9b:40:44:bd:5e ID:1,be:9b:40:44:bd:5e Lease:0x64a9e299}
	I0707 16:00:03.476269   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.39 HWAddress:92:23:35:2f:b4:e0 ID:1,92:23:35:2f:b4:e0 Lease:0x64a9e1d4}
	I0707 16:00:03.476276   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.38 HWAddress:ca:78:94:69:83:9d ID:1,ca:78:94:69:83:9d Lease:0x64a9e1a1}
	I0707 16:00:03.476281   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.37 HWAddress:2e:6b:da:4d:43:63 ID:1,2e:6b:da:4d:43:63 Lease:0x64a9e17a}
	I0707 16:00:03.476287   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.36 HWAddress:1a:1e:78:c2:3:5b ID:1,1a:1e:78:c2:3:5b Lease:0x64a9e14d}
	I0707 16:00:03.476292   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.35 HWAddress:56:21:77:6e:3e:d7 ID:1,56:21:77:6e:3e:d7 Lease:0x64a9e11c}
	I0707 16:00:03.476300   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.34 HWAddress:b6:79:6d:78:e5:91 ID:1,b6:79:6d:78:e5:91 Lease:0x64a9e0df}
	I0707 16:00:03.476309   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.33 HWAddress:e:d:6:69:70:57 ID:1,e:d:6:69:70:57 Lease:0x64a9e0b7}
	I0707 16:00:03.476316   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.32 HWAddress:12:c:fc:38:1b:7d ID:1,12:c:fc:38:1b:7d Lease:0x64a9e07e}
	I0707 16:00:03.476321   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.31 HWAddress:2a:be:6e:1c:25:23 ID:1,2a:be:6e:1c:25:23 Lease:0x64a9e029}
	I0707 16:00:03.476340   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.30 HWAddress:f6:26:cc:59:fb:86 ID:1,f6:26:cc:59:fb:86 Lease:0x64a9e002}
	I0707 16:00:03.476352   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.29 HWAddress:32:f4:fc:97:bb:c5 ID:1,32:f4:fc:97:bb:c5 Lease:0x64a9dfd9}
	I0707 16:00:03.476361   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.28 HWAddress:f2:ac:14:df:2:e1 ID:1,f2:ac:14:df:2:e1 Lease:0x64a9dfbd}
	I0707 16:00:03.476366   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:86:40:da:85:93:5 ID:1,86:40:da:85:93:5 Lease:0x64a9dfaf}
	I0707 16:00:03.476375   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:96:a2:d0:1b:cc:48 ID:1,96:a2:d0:1b:cc:48 Lease:0x64a88e33}
	I0707 16:00:03.476382   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:7e:18:98:3:79:ac ID:1,7e:18:98:3:79:ac Lease:0x64a88e11}
	I0707 16:00:03.476387   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:a6:24:7b:fc:77:e8 ID:1,a6:24:7b:fc:77:e8 Lease:0x64a88dd3}
	I0707 16:00:03.476392   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:da:c2:99:f7:b9:2b ID:1,da:c2:99:f7:b9:2b Lease:0x64a9def7}
	I0707 16:00:03.476398   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:12:2c:d8:d3:98:3d ID:1,12:2c:d8:d3:98:3d Lease:0x64a9de95}
	I0707 16:00:03.476404   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:96:63:2d:24:c0:8e ID:1,96:63:2d:24:c0:8e Lease:0x64a9deac}
	I0707 16:00:03.476411   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ce:d:af:fb:a4:29 ID:1,ce:d:af:fb:a4:29 Lease:0x64a88d00}
	I0707 16:00:03.476416   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:c6:d1:4d:d1:c6:c6 ID:1,c6:d1:4d:d1:c6:c6 Lease:0x64a9ddd3}
	I0707 16:00:03.476421   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:4a:0:ad:11:5a:b8 ID:1,4a:0:ad:11:5a:b8 Lease:0x64a9dd66}
	I0707 16:00:03.476426   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:76:24:4a:dc:55:63 ID:1,76:24:4a:dc:55:63 Lease:0x64a9dcf9}
	I0707 16:00:03.476432   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:7a:76:a9:a4:41:d6 ID:1,7a:76:a9:a4:41:d6 Lease:0x64a9dcac}
	I0707 16:00:03.476438   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:9a:d4:1c:20:49:9a ID:1,9a:d4:1c:20:49:9a Lease:0x64a88ab6}
	I0707 16:00:03.476447   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:5e:28:a1:fd:5:74 ID:1,5e:28:a1:fd:5:74 Lease:0x64a88a2a}
	I0707 16:00:03.476454   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:5e:f2:9f:60:b5:67 ID:1,5e:f2:9f:60:b5:67 Lease:0x64a9dbf9}
	I0707 16:00:03.476459   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:ba:29:cf:65:9a:f6 ID:1,ba:29:cf:65:9a:f6 Lease:0x64a9dbc4}
	I0707 16:00:03.476464   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:c2:46:a3:47:d0:6f ID:1,c2:46:a3:47:d0:6f Lease:0x64a888da}
	I0707 16:00:03.476476   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:e6:f8:da:7f:b0:2 ID:1,e6:f8:da:7f:b0:2 Lease:0x64a888ac}
	I0707 16:00:03.476485   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:1a:e8:95:cf:b7:2c ID:1,1a:e8:95:cf:b7:2c Lease:0x64a9d9e1}
	I0707 16:00:03.476492   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:82:60:aa:b:ac:82 ID:1,82:60:aa:b:ac:82 Lease:0x64a9d9bc}
	I0707 16:00:03.476498   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:b2:5e:c6:47:87:ac ID:1,b2:5e:c6:47:87:ac Lease:0x64a9d97e}
	I0707 16:00:03.476504   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:9a:cd:ff:b7:63:f ID:1,9a:cd:ff:b7:63:f Lease:0x64a9d8f8}
	I0707 16:00:03.476509   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:d2:28:10:10:80:9f ID:1,d2:28:10:10:80:9f Lease:0x64a9d8c0}
	I0707 16:00:03.476514   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:e:3f:ec:83:e4:c9 ID:1,e:3f:ec:83:e4:c9 Lease:0x64a9d7c9}
	I0707 16:00:03.476521   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:92:45:cd:92:c5:57 ID:1,92:45:cd:92:c5:57 Lease:0x64a8863e}
	I0707 16:00:03.476529   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ee:ec:a1:83:cb:bd ID:1,ee:ec:a1:83:cb:bd Lease:0x64a9d66a}
	I0707 16:00:05.076136   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 16:00:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0707 16:00:05.076233   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 16:00:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0707 16:00:05.076242   31333 main.go:141] libmachine: (second-015000) DBG | 2023/07/07 16:00:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0707 16:00:05.477607   31333 main.go:141] libmachine: (second-015000) DBG | Attempt 3
	I0707 16:00:05.477616   31333 main.go:141] libmachine: (second-015000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:00:05.477698   31333 main.go:141] libmachine: (second-015000) DBG | hyperkit pid from json: 31342
	I0707 16:00:05.478438   31333 main.go:141] libmachine: (second-015000) DBG | Searching for 5a:57:82:35:3f:0 in /var/db/dhcpd_leases ...
	I0707 16:00:05.478539   31333 main.go:141] libmachine: (second-015000) DBG | Found 50 entries in /var/db/dhcpd_leases!
	I0707 16:00:05.478547   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.51 HWAddress:2a:a0:3b:ac:43:5 ID:1,2a:a0:3b:ac:43:5 Lease:0x64a9ea51}
	I0707 16:00:05.478553   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.50 HWAddress:e:ef:40:f6:89:98 ID:1,e:ef:40:f6:89:98 Lease:0x64a9e9f8}
	I0707 16:00:05.478558   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.49 HWAddress:2a:32:21:a:c9:ea ID:1,2a:32:21:a:c9:ea Lease:0x64a9e979}
	I0707 16:00:05.478565   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.48 HWAddress:22:ae:6d:e6:53:22 ID:1,22:ae:6d:e6:53:22 Lease:0x64a9e93c}
	I0707 16:00:05.478576   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.47 HWAddress:d2:a:15:e7:c7:12 ID:1,d2:a:15:e7:c7:12 Lease:0x64a9e851}
	I0707 16:00:05.478587   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.46 HWAddress:8a:4d:70:3e:72:d7 ID:1,8a:4d:70:3e:72:d7 Lease:0x64a9e823}
	I0707 16:00:05.478598   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.45 HWAddress:1a:8e:ab:d5:52:f4 ID:1,1a:8e:ab:d5:52:f4 Lease:0x64a9e6d4}
	I0707 16:00:05.478606   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.44 HWAddress:72:19:49:64:8b:18 ID:1,72:19:49:64:8b:18 Lease:0x64a9e658}
	I0707 16:00:05.478611   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.43 HWAddress:8a:f2:e1:4a:aa:55 ID:1,8a:f2:e1:4a:aa:55 Lease:0x64a9e511}
	I0707 16:00:05.478617   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.42 HWAddress:36:71:99:1a:a1:ca ID:1,36:71:99:1a:a1:ca Lease:0x64a9e3df}
	I0707 16:00:05.478622   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.41 HWAddress:26:7d:94:3e:ce:86 ID:1,26:7d:94:3e:ce:86 Lease:0x64a9e25c}
	I0707 16:00:05.478633   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.40 HWAddress:be:9b:40:44:bd:5e ID:1,be:9b:40:44:bd:5e Lease:0x64a9e299}
	I0707 16:00:05.478638   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.39 HWAddress:92:23:35:2f:b4:e0 ID:1,92:23:35:2f:b4:e0 Lease:0x64a9e1d4}
	I0707 16:00:05.478644   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.38 HWAddress:ca:78:94:69:83:9d ID:1,ca:78:94:69:83:9d Lease:0x64a9e1a1}
	I0707 16:00:05.478652   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.37 HWAddress:2e:6b:da:4d:43:63 ID:1,2e:6b:da:4d:43:63 Lease:0x64a9e17a}
	I0707 16:00:05.478660   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.36 HWAddress:1a:1e:78:c2:3:5b ID:1,1a:1e:78:c2:3:5b Lease:0x64a9e14d}
	I0707 16:00:05.478671   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.35 HWAddress:56:21:77:6e:3e:d7 ID:1,56:21:77:6e:3e:d7 Lease:0x64a9e11c}
	I0707 16:00:05.478677   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.34 HWAddress:b6:79:6d:78:e5:91 ID:1,b6:79:6d:78:e5:91 Lease:0x64a9e0df}
	I0707 16:00:05.478687   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.33 HWAddress:e:d:6:69:70:57 ID:1,e:d:6:69:70:57 Lease:0x64a9e0b7}
	I0707 16:00:05.478695   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.32 HWAddress:12:c:fc:38:1b:7d ID:1,12:c:fc:38:1b:7d Lease:0x64a9e07e}
	I0707 16:00:05.478701   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.31 HWAddress:2a:be:6e:1c:25:23 ID:1,2a:be:6e:1c:25:23 Lease:0x64a9e029}
	I0707 16:00:05.478706   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.30 HWAddress:f6:26:cc:59:fb:86 ID:1,f6:26:cc:59:fb:86 Lease:0x64a9e002}
	I0707 16:00:05.478711   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.29 HWAddress:32:f4:fc:97:bb:c5 ID:1,32:f4:fc:97:bb:c5 Lease:0x64a9dfd9}
	I0707 16:00:05.478716   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.28 HWAddress:f2:ac:14:df:2:e1 ID:1,f2:ac:14:df:2:e1 Lease:0x64a9dfbd}
	I0707 16:00:05.478721   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:86:40:da:85:93:5 ID:1,86:40:da:85:93:5 Lease:0x64a9dfaf}
	I0707 16:00:05.478726   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:96:a2:d0:1b:cc:48 ID:1,96:a2:d0:1b:cc:48 Lease:0x64a88e33}
	I0707 16:00:05.478731   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:7e:18:98:3:79:ac ID:1,7e:18:98:3:79:ac Lease:0x64a88e11}
	I0707 16:00:05.478742   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:a6:24:7b:fc:77:e8 ID:1,a6:24:7b:fc:77:e8 Lease:0x64a88dd3}
	I0707 16:00:05.478749   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:da:c2:99:f7:b9:2b ID:1,da:c2:99:f7:b9:2b Lease:0x64a9def7}
	I0707 16:00:05.478755   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:12:2c:d8:d3:98:3d ID:1,12:2c:d8:d3:98:3d Lease:0x64a9de95}
	I0707 16:00:05.478760   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:96:63:2d:24:c0:8e ID:1,96:63:2d:24:c0:8e Lease:0x64a9deac}
	I0707 16:00:05.478765   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ce:d:af:fb:a4:29 ID:1,ce:d:af:fb:a4:29 Lease:0x64a88d00}
	I0707 16:00:05.478772   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:c6:d1:4d:d1:c6:c6 ID:1,c6:d1:4d:d1:c6:c6 Lease:0x64a9ddd3}
	I0707 16:00:05.478779   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:4a:0:ad:11:5a:b8 ID:1,4a:0:ad:11:5a:b8 Lease:0x64a9dd66}
	I0707 16:00:05.478786   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:76:24:4a:dc:55:63 ID:1,76:24:4a:dc:55:63 Lease:0x64a9dcf9}
	I0707 16:00:05.478791   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:7a:76:a9:a4:41:d6 ID:1,7a:76:a9:a4:41:d6 Lease:0x64a9dcac}
	I0707 16:00:05.478800   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:9a:d4:1c:20:49:9a ID:1,9a:d4:1c:20:49:9a Lease:0x64a88ab6}
	I0707 16:00:05.478805   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:5e:28:a1:fd:5:74 ID:1,5e:28:a1:fd:5:74 Lease:0x64a88a2a}
	I0707 16:00:05.478812   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:5e:f2:9f:60:b5:67 ID:1,5e:f2:9f:60:b5:67 Lease:0x64a9dbf9}
	I0707 16:00:05.478817   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:ba:29:cf:65:9a:f6 ID:1,ba:29:cf:65:9a:f6 Lease:0x64a9dbc4}
	I0707 16:00:05.478827   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:c2:46:a3:47:d0:6f ID:1,c2:46:a3:47:d0:6f Lease:0x64a888da}
	I0707 16:00:05.478833   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:e6:f8:da:7f:b0:2 ID:1,e6:f8:da:7f:b0:2 Lease:0x64a888ac}
	I0707 16:00:05.478845   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:1a:e8:95:cf:b7:2c ID:1,1a:e8:95:cf:b7:2c Lease:0x64a9d9e1}
	I0707 16:00:05.478852   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:82:60:aa:b:ac:82 ID:1,82:60:aa:b:ac:82 Lease:0x64a9d9bc}
	I0707 16:00:05.478857   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:b2:5e:c6:47:87:ac ID:1,b2:5e:c6:47:87:ac Lease:0x64a9d97e}
	I0707 16:00:05.478864   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:9a:cd:ff:b7:63:f ID:1,9a:cd:ff:b7:63:f Lease:0x64a9d8f8}
	I0707 16:00:05.478869   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:d2:28:10:10:80:9f ID:1,d2:28:10:10:80:9f Lease:0x64a9d8c0}
	I0707 16:00:05.478875   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:e:3f:ec:83:e4:c9 ID:1,e:3f:ec:83:e4:c9 Lease:0x64a9d7c9}
	I0707 16:00:05.478881   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:92:45:cd:92:c5:57 ID:1,92:45:cd:92:c5:57 Lease:0x64a8863e}
	I0707 16:00:05.478886   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ee:ec:a1:83:cb:bd ID:1,ee:ec:a1:83:cb:bd Lease:0x64a9d66a}
	I0707 16:00:07.478901   31333 main.go:141] libmachine: (second-015000) DBG | Attempt 4
	I0707 16:00:07.478914   31333 main.go:141] libmachine: (second-015000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:00:07.478979   31333 main.go:141] libmachine: (second-015000) DBG | hyperkit pid from json: 31342
	I0707 16:00:07.479729   31333 main.go:141] libmachine: (second-015000) DBG | Searching for 5a:57:82:35:3f:0 in /var/db/dhcpd_leases ...
	I0707 16:00:07.479828   31333 main.go:141] libmachine: (second-015000) DBG | Found 50 entries in /var/db/dhcpd_leases!
	I0707 16:00:07.479844   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.51 HWAddress:2a:a0:3b:ac:43:5 ID:1,2a:a0:3b:ac:43:5 Lease:0x64a9ea51}
	I0707 16:00:07.479850   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.50 HWAddress:e:ef:40:f6:89:98 ID:1,e:ef:40:f6:89:98 Lease:0x64a9e9f8}
	I0707 16:00:07.479857   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.49 HWAddress:2a:32:21:a:c9:ea ID:1,2a:32:21:a:c9:ea Lease:0x64a9e979}
	I0707 16:00:07.479865   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.48 HWAddress:22:ae:6d:e6:53:22 ID:1,22:ae:6d:e6:53:22 Lease:0x64a9e93c}
	I0707 16:00:07.479886   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.47 HWAddress:d2:a:15:e7:c7:12 ID:1,d2:a:15:e7:c7:12 Lease:0x64a9e851}
	I0707 16:00:07.479893   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.46 HWAddress:8a:4d:70:3e:72:d7 ID:1,8a:4d:70:3e:72:d7 Lease:0x64a9e823}
	I0707 16:00:07.479897   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.45 HWAddress:1a:8e:ab:d5:52:f4 ID:1,1a:8e:ab:d5:52:f4 Lease:0x64a9e6d4}
	I0707 16:00:07.479903   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.44 HWAddress:72:19:49:64:8b:18 ID:1,72:19:49:64:8b:18 Lease:0x64a9e658}
	I0707 16:00:07.479908   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.43 HWAddress:8a:f2:e1:4a:aa:55 ID:1,8a:f2:e1:4a:aa:55 Lease:0x64a9e511}
	I0707 16:00:07.479919   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.42 HWAddress:36:71:99:1a:a1:ca ID:1,36:71:99:1a:a1:ca Lease:0x64a9e3df}
	I0707 16:00:07.479925   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.41 HWAddress:26:7d:94:3e:ce:86 ID:1,26:7d:94:3e:ce:86 Lease:0x64a9e25c}
	I0707 16:00:07.479934   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.40 HWAddress:be:9b:40:44:bd:5e ID:1,be:9b:40:44:bd:5e Lease:0x64a9e299}
	I0707 16:00:07.479939   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.39 HWAddress:92:23:35:2f:b4:e0 ID:1,92:23:35:2f:b4:e0 Lease:0x64a9e1d4}
	I0707 16:00:07.479945   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.38 HWAddress:ca:78:94:69:83:9d ID:1,ca:78:94:69:83:9d Lease:0x64a9e1a1}
	I0707 16:00:07.479950   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.37 HWAddress:2e:6b:da:4d:43:63 ID:1,2e:6b:da:4d:43:63 Lease:0x64a9e17a}
	I0707 16:00:07.479955   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.36 HWAddress:1a:1e:78:c2:3:5b ID:1,1a:1e:78:c2:3:5b Lease:0x64a9e14d}
	I0707 16:00:07.479960   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.35 HWAddress:56:21:77:6e:3e:d7 ID:1,56:21:77:6e:3e:d7 Lease:0x64a9e11c}
	I0707 16:00:07.479970   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.34 HWAddress:b6:79:6d:78:e5:91 ID:1,b6:79:6d:78:e5:91 Lease:0x64a9e0df}
	I0707 16:00:07.479976   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.33 HWAddress:e:d:6:69:70:57 ID:1,e:d:6:69:70:57 Lease:0x64a9e0b7}
	I0707 16:00:07.479983   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.32 HWAddress:12:c:fc:38:1b:7d ID:1,12:c:fc:38:1b:7d Lease:0x64a9e07e}
	I0707 16:00:07.479989   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.31 HWAddress:2a:be:6e:1c:25:23 ID:1,2a:be:6e:1c:25:23 Lease:0x64a9e029}
	I0707 16:00:07.479994   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.30 HWAddress:f6:26:cc:59:fb:86 ID:1,f6:26:cc:59:fb:86 Lease:0x64a9e002}
	I0707 16:00:07.480019   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.29 HWAddress:32:f4:fc:97:bb:c5 ID:1,32:f4:fc:97:bb:c5 Lease:0x64a9dfd9}
	I0707 16:00:07.480035   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.28 HWAddress:f2:ac:14:df:2:e1 ID:1,f2:ac:14:df:2:e1 Lease:0x64a9dfbd}
	I0707 16:00:07.480046   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:86:40:da:85:93:5 ID:1,86:40:da:85:93:5 Lease:0x64a9dfaf}
	I0707 16:00:07.480054   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:96:a2:d0:1b:cc:48 ID:1,96:a2:d0:1b:cc:48 Lease:0x64a88e33}
	I0707 16:00:07.480060   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:7e:18:98:3:79:ac ID:1,7e:18:98:3:79:ac Lease:0x64a88e11}
	I0707 16:00:07.480067   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:a6:24:7b:fc:77:e8 ID:1,a6:24:7b:fc:77:e8 Lease:0x64a88dd3}
	I0707 16:00:07.480073   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:da:c2:99:f7:b9:2b ID:1,da:c2:99:f7:b9:2b Lease:0x64a9def7}
	I0707 16:00:07.480079   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:12:2c:d8:d3:98:3d ID:1,12:2c:d8:d3:98:3d Lease:0x64a9de95}
	I0707 16:00:07.480085   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:96:63:2d:24:c0:8e ID:1,96:63:2d:24:c0:8e Lease:0x64a9deac}
	I0707 16:00:07.480093   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ce:d:af:fb:a4:29 ID:1,ce:d:af:fb:a4:29 Lease:0x64a88d00}
	I0707 16:00:07.480099   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:c6:d1:4d:d1:c6:c6 ID:1,c6:d1:4d:d1:c6:c6 Lease:0x64a9ddd3}
	I0707 16:00:07.480106   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:4a:0:ad:11:5a:b8 ID:1,4a:0:ad:11:5a:b8 Lease:0x64a9dd66}
	I0707 16:00:07.480111   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:76:24:4a:dc:55:63 ID:1,76:24:4a:dc:55:63 Lease:0x64a9dcf9}
	I0707 16:00:07.480116   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:7a:76:a9:a4:41:d6 ID:1,7a:76:a9:a4:41:d6 Lease:0x64a9dcac}
	I0707 16:00:07.480121   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:9a:d4:1c:20:49:9a ID:1,9a:d4:1c:20:49:9a Lease:0x64a88ab6}
	I0707 16:00:07.480127   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:5e:28:a1:fd:5:74 ID:1,5e:28:a1:fd:5:74 Lease:0x64a88a2a}
	I0707 16:00:07.480134   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:5e:f2:9f:60:b5:67 ID:1,5e:f2:9f:60:b5:67 Lease:0x64a9dbf9}
	I0707 16:00:07.480143   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:ba:29:cf:65:9a:f6 ID:1,ba:29:cf:65:9a:f6 Lease:0x64a9dbc4}
	I0707 16:00:07.480150   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:c2:46:a3:47:d0:6f ID:1,c2:46:a3:47:d0:6f Lease:0x64a888da}
	I0707 16:00:07.480156   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:e6:f8:da:7f:b0:2 ID:1,e6:f8:da:7f:b0:2 Lease:0x64a888ac}
	I0707 16:00:07.480163   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:1a:e8:95:cf:b7:2c ID:1,1a:e8:95:cf:b7:2c Lease:0x64a9d9e1}
	I0707 16:00:07.480169   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:82:60:aa:b:ac:82 ID:1,82:60:aa:b:ac:82 Lease:0x64a9d9bc}
	I0707 16:00:07.480174   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:b2:5e:c6:47:87:ac ID:1,b2:5e:c6:47:87:ac Lease:0x64a9d97e}
	I0707 16:00:07.480181   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:9a:cd:ff:b7:63:f ID:1,9a:cd:ff:b7:63:f Lease:0x64a9d8f8}
	I0707 16:00:07.480187   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:d2:28:10:10:80:9f ID:1,d2:28:10:10:80:9f Lease:0x64a9d8c0}
	I0707 16:00:07.480197   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:e:3f:ec:83:e4:c9 ID:1,e:3f:ec:83:e4:c9 Lease:0x64a9d7c9}
	I0707 16:00:07.480204   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:92:45:cd:92:c5:57 ID:1,92:45:cd:92:c5:57 Lease:0x64a8863e}
	I0707 16:00:07.480214   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ee:ec:a1:83:cb:bd ID:1,ee:ec:a1:83:cb:bd Lease:0x64a9d66a}
	I0707 16:00:09.480317   31333 main.go:141] libmachine: (second-015000) DBG | Attempt 5
	I0707 16:00:09.480333   31333 main.go:141] libmachine: (second-015000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:00:09.480435   31333 main.go:141] libmachine: (second-015000) DBG | hyperkit pid from json: 31342
	I0707 16:00:09.481355   31333 main.go:141] libmachine: (second-015000) DBG | Searching for 5a:57:82:35:3f:0 in /var/db/dhcpd_leases ...
	I0707 16:00:09.481460   31333 main.go:141] libmachine: (second-015000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0707 16:00:09.481472   31333 main.go:141] libmachine: (second-015000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.52 HWAddress:5a:57:82:35:3f:0 ID:1,5a:57:82:35:3f:0 Lease:0x64a9ea78}
	I0707 16:00:09.481481   31333 main.go:141] libmachine: (second-015000) DBG | Found match: 5a:57:82:35:3f:0
	I0707 16:00:09.481486   31333 main.go:141] libmachine: (second-015000) DBG | IP: 192.168.64.52
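The attempts above are the hyperkit driver polling macOS's DHCP lease database every couple of seconds until the new VM's MAC address (5a:57:82:35:3f:0) shows up. Below is a minimal Go sketch of that lookup, assuming the stock /var/db/dhcpd_leases record format (blocks of name/ip_address/hw_address/lease fields, matching the entries printed above); it is a simplification for illustration, not the driver's actual parser.

// leasescan.go: minimal sketch of resolving a VM's IP from macOS's
// DHCP lease database, the way the hyperkit driver does above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC scans the leases file for a hw_address entry matching the
// target MAC and returns the ip_address seen in the same record.
func findIPByMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address values look like "1,5a:57:82:35:3f:0"; the
			// leading "1," is the hardware type, so match on the suffix.
			if strings.HasSuffix(line, ","+mac) {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease for %s", mac)
}

func main() {
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "5a:57:82:35:3f:0")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // 192.168.64.52 in the run above
}

Each "Attempt N" above corresponds to one such scan, roughly two seconds apart, until the matching entry appears on attempt 5.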
	I0707 16:00:09.481562   31333 main.go:141] libmachine: (second-015000) Calling .GetConfigRaw
	I0707 16:00:09.482217   31333 main.go:141] libmachine: (second-015000) Calling .DriverName
	I0707 16:00:09.482344   31333 main.go:141] libmachine: (second-015000) Calling .DriverName
	I0707 16:00:09.482449   31333 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0707 16:00:09.482461   31333 main.go:141] libmachine: (second-015000) Calling .GetState
	I0707 16:00:09.482582   31333 main.go:141] libmachine: (second-015000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:00:09.482651   31333 main.go:141] libmachine: (second-015000) DBG | hyperkit pid from json: 31342
	I0707 16:00:09.483577   31333 main.go:141] libmachine: Detecting operating system of created instance...
	I0707 16:00:09.483590   31333 main.go:141] libmachine: Waiting for SSH to be available...
	I0707 16:00:09.483594   31333 main.go:141] libmachine: Getting to WaitForSSH function...
	I0707 16:00:09.483600   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHHostname
	I0707 16:00:09.483759   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHPort
	I0707 16:00:09.483894   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:09.484013   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:09.484161   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHUsername
	I0707 16:00:09.484325   31333 main.go:141] libmachine: Using SSH client type: native
	I0707 16:00:09.484711   31333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.52 22 <nil> <nil>}
	I0707 16:00:09.484716   31333 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0707 16:00:09.522097   31333 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0707 16:00:12.598521   31333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
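The handshake failure at 16:00:09 followed by the clean `exit 0` at 16:00:12 is the standard WaitForSSH pattern: dial the guest and run a no-op command until key authentication succeeds (the first try fails while the guest is still provisioning its keys). A hedged sketch of the same loop using golang.org/x/crypto/ssh; the address, user, and key path are copied from this run, and the retry count and interval are illustrative rather than libmachine's exact values.

// waitforssh.go: sketch of the WaitForSSH pattern seen above.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// trySSH dials the guest and runs the same no-op probe the log shows.
func trySSH(addr string, cfg *ssh.ClientConfig) error {
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/id_rsa")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM
		Timeout:         5 * time.Second,
	}
	for i := 0; i < 60; i++ { // illustrative retry budget
		if err := trySSH("192.168.64.52:22", cfg); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for SSH")
	os.Exit(1)
}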
	I0707 16:00:12.598531   31333 main.go:141] libmachine: Detecting the provisioner...
	I0707 16:00:12.598535   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHHostname
	I0707 16:00:12.598678   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHPort
	I0707 16:00:12.598771   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:12.598865   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:12.598962   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHUsername
	I0707 16:00:12.599086   31333 main.go:141] libmachine: Using SSH client type: native
	I0707 16:00:12.599396   31333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.52 22 <nil> <nil>}
	I0707 16:00:12.599401   31333 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0707 16:00:12.674464   31333 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g6f2898e-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0707 16:00:12.674529   31333 main.go:141] libmachine: found compatible host: buildroot
	I0707 16:00:12.674533   31333 main.go:141] libmachine: Provisioning with buildroot...
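Provisioner detection boils down to reading /etc/os-release on the guest and matching fields such as ID; libmachine's real detector inspects more than this, so treat the following as a minimal sketch.

// osrelease.go: minimal sketch of provisioner detection. Reads an
// os-release file and returns the ID field ("buildroot" on a minikube
// guest, per the output above).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func detectID(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
			return strings.Trim(v, `"`), nil
		}
	}
	return "", fmt.Errorf("no ID field in %s", path)
}

func main() {
	id, err := detectID("/etc/os-release")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("detected provisioner ID:", id)
}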
	I0707 16:00:12.674537   31333 main.go:141] libmachine: (second-015000) Calling .GetMachineName
	I0707 16:00:12.674668   31333 buildroot.go:166] provisioning hostname "second-015000"
	I0707 16:00:12.674676   31333 main.go:141] libmachine: (second-015000) Calling .GetMachineName
	I0707 16:00:12.674785   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHHostname
	I0707 16:00:12.674864   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHPort
	I0707 16:00:12.674945   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:12.675031   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:12.675104   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHUsername
	I0707 16:00:12.675226   31333 main.go:141] libmachine: Using SSH client type: native
	I0707 16:00:12.675527   31333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.52 22 <nil> <nil>}
	I0707 16:00:12.675534   31333 main.go:141] libmachine: About to run SSH command:
	sudo hostname second-015000 && echo "second-015000" | sudo tee /etc/hostname
	I0707 16:00:12.758526   31333 main.go:141] libmachine: SSH cmd err, output: <nil>: second-015000
	
	I0707 16:00:12.758545   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHHostname
	I0707 16:00:12.758672   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHPort
	I0707 16:00:12.758780   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:12.758872   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:12.758954   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHUsername
	I0707 16:00:12.759076   31333 main.go:141] libmachine: Using SSH client type: native
	I0707 16:00:12.759377   31333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.52 22 <nil> <nil>}
	I0707 16:00:12.759385   31333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\ssecond-015000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 second-015000/g' /etc/hosts;
				else 
					echo '127.0.1.1 second-015000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0707 16:00:12.840227   31333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0707 16:00:12.840242   31333 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16845-29196/.minikube CaCertPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16845-29196/.minikube}
	I0707 16:00:12.840257   31333 buildroot.go:174] setting up certificates
	I0707 16:00:12.840269   31333 provision.go:83] configureAuth start
	I0707 16:00:12.840274   31333 main.go:141] libmachine: (second-015000) Calling .GetMachineName
	I0707 16:00:12.840412   31333 main.go:141] libmachine: (second-015000) Calling .GetIP
	I0707 16:00:12.840498   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHHostname
	I0707 16:00:12.840562   31333 provision.go:138] copyHostCerts
	I0707 16:00:12.840646   31333 exec_runner.go:144] found /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem, removing ...
	I0707 16:00:12.840652   31333 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem
	I0707 16:00:12.841459   31333 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem (1082 bytes)
	I0707 16:00:12.841654   31333 exec_runner.go:144] found /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem, removing ...
	I0707 16:00:12.841658   31333 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem
	I0707 16:00:12.841717   31333 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem (1123 bytes)
	I0707 16:00:12.841881   31333 exec_runner.go:144] found /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem, removing ...
	I0707 16:00:12.841884   31333 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem
	I0707 16:00:12.841941   31333 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem (1675 bytes)
	I0707 16:00:12.842066   31333 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca-key.pem org=jenkins.second-015000 san=[192.168.64.52 192.168.64.52 localhost 127.0.0.1 minikube second-015000]
	I0707 16:00:12.901925   31333 provision.go:172] copyRemoteCerts
	I0707 16:00:12.901979   31333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0707 16:00:12.901993   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHHostname
	I0707 16:00:12.902140   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHPort
	I0707 16:00:12.902251   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:12.902358   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHUsername
	I0707 16:00:12.902460   31333 sshutil.go:53] new ssh client: &{IP:192.168.64.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/id_rsa Username:docker}
	I0707 16:00:12.946268   31333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0707 16:00:12.961728   31333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0707 16:00:12.977215   31333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0707 16:00:12.992222   31333 provision.go:86] duration metric: configureAuth took 151.939462ms
	I0707 16:00:12.992232   31333 buildroot.go:189] setting minikube options for container-runtime
	I0707 16:00:12.992352   31333 config.go:182] Loaded profile config "second-015000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0707 16:00:12.992362   31333 main.go:141] libmachine: (second-015000) Calling .DriverName
	I0707 16:00:12.992505   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHHostname
	I0707 16:00:12.992588   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHPort
	I0707 16:00:12.992687   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:12.992766   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:12.992839   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHUsername
	I0707 16:00:12.992945   31333 main.go:141] libmachine: Using SSH client type: native
	I0707 16:00:12.993231   31333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.52 22 <nil> <nil>}
	I0707 16:00:12.993236   31333 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0707 16:00:13.070244   31333 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0707 16:00:13.070258   31333 buildroot.go:70] root file system type: tmpfs
	I0707 16:00:13.070346   31333 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0707 16:00:13.070359   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHHostname
	I0707 16:00:13.070488   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHPort
	I0707 16:00:13.070571   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:13.070647   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:13.070747   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHUsername
	I0707 16:00:13.070871   31333 main.go:141] libmachine: Using SSH client type: native
	I0707 16:00:13.071175   31333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.52 22 <nil> <nil>}
	I0707 16:00:13.071218   31333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0707 16:00:13.155090   31333 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
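A note on the `%!s(MISSING)` tokens in the `printf` command above (they also appear later in `date +%!s(MISSING).%!N(MISSING)` and in the CNI step's `-printf "%!p(MISSING), "`): they are not part of what ran on the guest. `%!verb(MISSING)` is Go's fmt notation for a format verb with no matching argument, produced when the literal shell command, which legitimately contains `%s`, `%N`, or `%p`, is rendered through a Printf-style logger. The cleanly echoed unit file here, and the valid epoch timestamp the date command returns below, confirm the guest received the unmangled commands. A tiny demonstration:

// fmtmissing.go: why the log shows "%!s(MISSING)".
package main

import "fmt"

func main() {
	// The %s and %N verbs have no corresponding arguments, so fmt
	// annotates them instead of substituting values.
	fmt.Println(fmt.Sprintf("date +%s.%N"))
	// Output: date +%!s(MISSING).%!N(MISSING)
}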
	
	I0707 16:00:13.164025   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHHostname
	I0707 16:00:13.164150   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHPort
	I0707 16:00:13.164228   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:13.164295   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:13.164376   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHUsername
	I0707 16:00:13.164493   31333 main.go:141] libmachine: Using SSH client type: native
	I0707 16:00:13.164795   31333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.52 22 <nil> <nil>}
	I0707 16:00:13.164808   31333 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0707 16:00:13.685541   31333 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0707 16:00:13.685551   31333 main.go:141] libmachine: Checking connection to Docker...
	I0707 16:00:13.685556   31333 main.go:141] libmachine: (second-015000) Calling .GetURL
	I0707 16:00:13.685691   31333 main.go:141] libmachine: Docker is up and running!
	I0707 16:00:13.685696   31333 main.go:141] libmachine: Reticulating splines...
	I0707 16:00:13.685699   31333 client.go:171] LocalClient.Create took 14.991725097s
	I0707 16:00:13.685709   31333 start.go:167] duration metric: libmachine.API.Create for "second-015000" took 14.991755486s
	I0707 16:00:13.685718   31333 start.go:300] post-start starting for "second-015000" (driver="hyperkit")
	I0707 16:00:13.685727   31333 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0707 16:00:13.685738   31333 main.go:141] libmachine: (second-015000) Calling .DriverName
	I0707 16:00:13.685872   31333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0707 16:00:13.685884   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHHostname
	I0707 16:00:13.685962   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHPort
	I0707 16:00:13.686050   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:13.686123   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHUsername
	I0707 16:00:13.686199   31333 sshutil.go:53] new ssh client: &{IP:192.168.64.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/id_rsa Username:docker}
	I0707 16:00:13.729249   31333 ssh_runner.go:195] Run: cat /etc/os-release
	I0707 16:00:13.731843   31333 info.go:137] Remote host: Buildroot 2021.02.12
	I0707 16:00:13.731853   31333 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16845-29196/.minikube/addons for local assets ...
	I0707 16:00:13.731932   31333 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16845-29196/.minikube/files for local assets ...
	I0707 16:00:13.732083   31333 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem -> 296432.pem in /etc/ssl/certs
	I0707 16:00:13.732246   31333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0707 16:00:13.738910   31333 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem --> /etc/ssl/certs/296432.pem (1708 bytes)
	I0707 16:00:13.754748   31333 start.go:303] post-start completed in 69.022995ms
	I0707 16:00:13.754776   31333 main.go:141] libmachine: (second-015000) Calling .GetConfigRaw
	I0707 16:00:13.755341   31333 main.go:141] libmachine: (second-015000) Calling .GetIP
	I0707 16:00:13.755503   31333 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/second-015000/config.json ...
	I0707 16:00:13.755789   31333 start.go:128] duration metric: createHost completed in 15.121316241s
	I0707 16:00:13.755804   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHHostname
	I0707 16:00:13.755891   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHPort
	I0707 16:00:13.755967   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:13.756042   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:13.756116   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHUsername
	I0707 16:00:13.756213   31333 main.go:141] libmachine: Using SSH client type: native
	I0707 16:00:13.756512   31333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.52 22 <nil> <nil>}
	I0707 16:00:13.756517   31333 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0707 16:00:13.830707   31333 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688770813.818122272
	
	I0707 16:00:13.830713   31333 fix.go:206] guest clock: 1688770813.818122272
	I0707 16:00:13.830718   31333 fix.go:219] Guest: 2023-07-07 16:00:13.818122272 -0700 PDT Remote: 2023-07-07 16:00:13.755797 -0700 PDT m=+15.646358748 (delta=62.325272ms)
	I0707 16:00:13.830734   31333 fix.go:190] guest clock delta is within tolerance: 62.325272ms
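	# Not from the log, a sketch: the delta above is the guest clock (read over SSH with
	# the `date +%s.%N` just issued) subtracted from the host clock at receipt. A
	# hypothetical standalone equivalent, reusing the key path and IP shown above:
	#   guest=$(ssh -i /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/id_rsa docker@192.168.64.52 'date +%s.%N')  # guest is Linux, %N works there
	#   host=$(python3 -c 'import time; print(time.time())')  # macOS date lacks %N
	#   echo "delta: $(echo "$host - $guest" | bc) s"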
	I0707 16:00:13.830737   31333 start.go:83] releasing machines lock for "second-015000", held for 15.196380812s
	I0707 16:00:13.830753   31333 main.go:141] libmachine: (second-015000) Calling .DriverName
	I0707 16:00:13.830878   31333 main.go:141] libmachine: (second-015000) Calling .GetIP
	I0707 16:00:13.830966   31333 main.go:141] libmachine: (second-015000) Calling .DriverName
	I0707 16:00:13.831271   31333 main.go:141] libmachine: (second-015000) Calling .DriverName
	I0707 16:00:13.831353   31333 main.go:141] libmachine: (second-015000) Calling .DriverName
	I0707 16:00:13.831436   31333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0707 16:00:13.831458   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHHostname
	I0707 16:00:13.831470   31333 ssh_runner.go:195] Run: cat /version.json
	I0707 16:00:13.831478   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHHostname
	I0707 16:00:13.831564   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHPort
	I0707 16:00:13.831576   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHPort
	I0707 16:00:13.831660   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:13.831675   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHKeyPath
	I0707 16:00:13.831752   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHUsername
	I0707 16:00:13.831770   31333 main.go:141] libmachine: (second-015000) Calling .GetSSHUsername
	I0707 16:00:13.831844   31333 sshutil.go:53] new ssh client: &{IP:192.168.64.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/id_rsa Username:docker}
	I0707 16:00:13.831876   31333 sshutil.go:53] new ssh client: &{IP:192.168.64.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/second-015000/id_rsa Username:docker}
	I0707 16:00:13.870991   31333 ssh_runner.go:195] Run: systemctl --version
	I0707 16:00:13.916900   31333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0707 16:00:13.921261   31333 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0707 16:00:13.921297   31333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0707 16:00:13.932599   31333 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0707 16:00:13.932607   31333 start.go:466] detecting cgroup driver to use...
	I0707 16:00:13.932706   31333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0707 16:00:13.945583   31333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0707 16:00:13.952830   31333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0707 16:00:13.959926   31333 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0707 16:00:13.959970   31333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0707 16:00:13.967077   31333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0707 16:00:13.974097   31333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0707 16:00:13.981286   31333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0707 16:00:13.988328   31333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0707 16:00:13.995544   31333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0707 16:00:14.002706   31333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0707 16:00:14.009050   31333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0707 16:00:14.015458   31333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0707 16:00:14.098583   31333 ssh_runner.go:195] Run: sudo systemctl restart containerd
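	# Not from the log, a sketch: the sed edits above pin containerd to the cgroupfs
	# driver and the v2 runc shim. One plausible in-guest verification:
	#   grep -E 'SystemdCgroup|conf_dir|sandbox_image' /etc/containerd/config.toml
	#   # expected, given the edits: SystemdCgroup = false, conf_dir = "/etc/cni/net.d",
	#   # sandbox_image = "registry.k8s.io/pause:3.9"
	#   sudo systemctl is-active containerd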
	I0707 16:00:14.109604   31333 start.go:466] detecting cgroup driver to use...
	I0707 16:00:14.109669   31333 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0707 16:00:14.120180   31333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0707 16:00:14.129464   31333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0707 16:00:14.140760   31333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0707 16:00:14.149735   31333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0707 16:00:14.158094   31333 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0707 16:00:14.184020   31333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0707 16:00:14.192933   31333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0707 16:00:14.205470   31333 ssh_runner.go:195] Run: which cri-dockerd
	I0707 16:00:14.207864   31333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0707 16:00:14.213425   31333 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0707 16:00:14.224386   31333 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0707 16:00:14.326050   31333 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0707 16:00:14.414640   31333 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0707 16:00:14.414673   31333 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0707 16:00:14.426826   31333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0707 16:00:14.513426   31333 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0707 16:00:15.821730   31333 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.308260718s)
	I0707 16:00:15.821788   31333 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0707 16:00:15.907081   31333 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0707 16:00:15.999062   31333 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0707 16:00:16.088307   31333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0707 16:00:16.185837   31333 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0707 16:00:16.226187   31333 out.go:177] 
	W0707 16:00:16.247301   31333 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0707 16:00:16.247323   31333 out.go:239] * 
	W0707 16:00:16.248541   31333 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0707 16:00:16.314874   31333 out.go:177] 
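	# Not from the log, a sketch: a minimal in-guest triage for the RUNTIME_ENABLE
	# failure above, using standard systemd tooling as the error message advises:
	#   systemctl status cri-docker.socket cri-docker.service
	#   journalctl -xe --no-pager -u cri-docker.socket -u cri-docker.service
	#   ls -l /var/run/cri-dockerd.sock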
	
	* 
	* ==> Docker <==
	* -- Journal begins at Fri 2023-07-07 22:59:28 UTC, ends at Fri 2023-07-07 23:00:22 UTC. --
	Jul 07 23:00:09 first-013000 dockerd[1136]: time="2023-07-07T23:00:09.875347861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:00:09 first-013000 cri-dockerd[1026]: time="2023-07-07T23:00:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0fb60d771fba0b27520c5412678f3d36eee64cca7bb43ead692f0dc3fc327bf8/resolv.conf as [nameserver 192.168.64.1]"
	Jul 07 23:00:09 first-013000 dockerd[1136]: time="2023-07-07T23:00:09.979710887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 07 23:00:09 first-013000 dockerd[1136]: time="2023-07-07T23:00:09.979777770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:00:09 first-013000 dockerd[1136]: time="2023-07-07T23:00:09.979794997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 07 23:00:09 first-013000 dockerd[1136]: time="2023-07-07T23:00:09.979804585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:00:09 first-013000 dockerd[1136]: time="2023-07-07T23:00:09.986423004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 07 23:00:09 first-013000 dockerd[1136]: time="2023-07-07T23:00:09.986506184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:00:09 first-013000 dockerd[1136]: time="2023-07-07T23:00:09.986522893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 07 23:00:09 first-013000 dockerd[1136]: time="2023-07-07T23:00:09.986597186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:00:10 first-013000 cri-dockerd[1026]: time="2023-07-07T23:00:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/708cfebfb35e2df07d10ebc4feec77365bf1bf7342c4f1c85698e4cb0d31d955/resolv.conf as [nameserver 192.168.64.1]"
	Jul 07 23:00:10 first-013000 dockerd[1136]: time="2023-07-07T23:00:10.355935074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 07 23:00:10 first-013000 dockerd[1136]: time="2023-07-07T23:00:10.355997388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:00:10 first-013000 dockerd[1136]: time="2023-07-07T23:00:10.356021689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 07 23:00:10 first-013000 dockerd[1136]: time="2023-07-07T23:00:10.356294257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:00:11 first-013000 dockerd[1136]: time="2023-07-07T23:00:11.503624005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 07 23:00:11 first-013000 dockerd[1136]: time="2023-07-07T23:00:11.503700459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:00:11 first-013000 dockerd[1136]: time="2023-07-07T23:00:11.503717397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 07 23:00:11 first-013000 dockerd[1136]: time="2023-07-07T23:00:11.503728630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:00:11 first-013000 cri-dockerd[1026]: time="2023-07-07T23:00:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e4420f0a3b4bcc9994be7bddb89e76a3c140ea6d90173611954715e2c4ae7d7c/resolv.conf as [nameserver 192.168.64.1]"
	Jul 07 23:00:11 first-013000 dockerd[1136]: time="2023-07-07T23:00:11.860777746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 07 23:00:11 first-013000 dockerd[1136]: time="2023-07-07T23:00:11.860840389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:00:11 first-013000 dockerd[1136]: time="2023-07-07T23:00:11.860998935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 07 23:00:11 first-013000 dockerd[1136]: time="2023-07-07T23:00:11.861036688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:00:17 first-013000 cri-dockerd[1026]: time="2023-07-07T23:00:17Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	00a9c55e3fbf9       ead0a4a53df89       11 seconds ago      Running             coredns                   0                   e4420f0a3b4bc
	af2a3d92627ef       6e38f40d628db       12 seconds ago      Running             storage-provisioner       0                   708cfebfb35e2
	f81b2313a30c2       5780543258cf0       13 seconds ago      Running             kube-proxy                0                   0fb60d771fba0
	1238639ff0ed4       41697ceeb70b3       32 seconds ago      Running             kube-scheduler            0                   22f0d7dee1ba4
	09ddd12e5bd59       86b6af7dd652c       32 seconds ago      Running             etcd                      0                   44325d0a5f320
	7d650a923d412       08a0c939e61b7       32 seconds ago      Running             kube-apiserver            0                   6b7c2ec8b52f7
	0193ba7b0d79d       7cffc01dba0e1       33 seconds ago      Running             kube-controller-manager   0                   a513fcb33309d
	
	* 
	* ==> coredns [00a9c55e3fbf] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 82b95b61957b89eeea31bdaf6987f010031330ef97d5f8469dbdaa80b119a5b0c9955b961009dd5b77ee3ada002b456836be781510516cbd9d015b1a704a24ea
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45895 - 54286 "HINFO IN 9076689397951422660.5140076821266140522. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.00361186s
	
	* 
	* ==> describe nodes <==
	* Name:               first-013000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=first-013000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794
	                    minikube.k8s.io/name=first-013000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_07T15_59_56_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 07 Jul 2023 22:59:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  first-013000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 07 Jul 2023 23:00:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 07 Jul 2023 23:00:17 +0000   Fri, 07 Jul 2023 22:59:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 07 Jul 2023 23:00:17 +0000   Fri, 07 Jul 2023 22:59:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 07 Jul 2023 23:00:17 +0000   Fri, 07 Jul 2023 22:59:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 07 Jul 2023 23:00:17 +0000   Fri, 07 Jul 2023 22:59:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.64.51
	  Hostname:    first-013000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             5925796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             5925796Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbb3647144bb4f379a642a10b23c3f34
	  System UUID:                e79211ee-0000-0000-9e09-149d997f80ea
	  Boot ID:                    012a44b2-dcff-48df-afad-c80a54a4d15c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-s4ln4                100m (5%)     0 (0%)      70Mi (1%)        170Mi (2%)     14s
	  kube-system                 etcd-first-013000                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         29s
	  kube-system                 kube-apiserver-first-013000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-first-013000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-6zrwv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 kube-scheduler-first-013000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 35s)  kubelet          Node first-013000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 35s)  kubelet          Node first-013000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x7 over 35s)  kubelet          Node first-013000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27s                kubelet          Node first-013000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s                kubelet          Node first-013000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s                kubelet          Node first-013000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                26s                kubelet          Node first-013000 status is now: NodeReady
	  Normal  RegisteredNode           14s                node-controller  Node first-013000 event: Registered Node first-013000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.008973] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.145937] systemd-fstab-generator[125]: Ignoring "noauto" for root device
	[  +0.037669] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.903582] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +3.150084] systemd-fstab-generator[550]: Ignoring "noauto" for root device
	[  +0.087374] systemd-fstab-generator[561]: Ignoring "noauto" for root device
	[  +0.802036] systemd-fstab-generator[747]: Ignoring "noauto" for root device
	[  +0.211936] systemd-fstab-generator[787]: Ignoring "noauto" for root device
	[  +0.091131] systemd-fstab-generator[798]: Ignoring "noauto" for root device
	[  +0.096959] systemd-fstab-generator[811]: Ignoring "noauto" for root device
	[  +1.278729] kauditd_printk_skb: 16 callbacks suppressed
	[  +0.156167] systemd-fstab-generator[971]: Ignoring "noauto" for root device
	[  +0.091563] systemd-fstab-generator[982]: Ignoring "noauto" for root device
	[  +0.090543] systemd-fstab-generator[993]: Ignoring "noauto" for root device
	[  +0.085043] systemd-fstab-generator[1004]: Ignoring "noauto" for root device
	[  +0.103638] systemd-fstab-generator[1018]: Ignoring "noauto" for root device
	[  +5.412673] systemd-fstab-generator[1121]: Ignoring "noauto" for root device
	[  +1.805631] kauditd_printk_skb: 29 callbacks suppressed
	[  +4.156735] systemd-fstab-generator[1443]: Ignoring "noauto" for root device
	[  +7.795613] systemd-fstab-generator[2369]: Ignoring "noauto" for root device
	[Jul 7 23:00] kauditd_printk_skb: 39 callbacks suppressed
	
	* 
	* ==> etcd [09ddd12e5bd5] <==
	* {"level":"info","ts":"2023-07-07T22:59:51.240Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-07T22:59:51.240Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b8434eb7a3474524","initial-advertise-peer-urls":["https://192.168.64.51:2380"],"listen-peer-urls":["https://192.168.64.51:2380"],"advertise-client-urls":["https://192.168.64.51:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.51:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-07T22:59:51.240Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-07T22:59:51.240Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.64.51:2380"}
	{"level":"info","ts":"2023-07-07T22:59:51.240Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.64.51:2380"}
	{"level":"info","ts":"2023-07-07T22:59:51.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8434eb7a3474524 is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-07T22:59:51.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8434eb7a3474524 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-07T22:59:51.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8434eb7a3474524 received MsgPreVoteResp from b8434eb7a3474524 at term 1"}
	{"level":"info","ts":"2023-07-07T22:59:51.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8434eb7a3474524 became candidate at term 2"}
	{"level":"info","ts":"2023-07-07T22:59:51.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8434eb7a3474524 received MsgVoteResp from b8434eb7a3474524 at term 2"}
	{"level":"info","ts":"2023-07-07T22:59:51.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8434eb7a3474524 became leader at term 2"}
	{"level":"info","ts":"2023-07-07T22:59:51.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8434eb7a3474524 elected leader b8434eb7a3474524 at term 2"}
	{"level":"info","ts":"2023-07-07T22:59:51.599Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b8434eb7a3474524","local-member-attributes":"{Name:first-013000 ClientURLs:[https://192.168.64.51:2379]}","request-path":"/0/members/b8434eb7a3474524/attributes","cluster-id":"4fda3446f8920824","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-07T22:59:51.599Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-07T22:59:51.602Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.64.51:2379"}
	{"level":"info","ts":"2023-07-07T22:59:51.602Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-07T22:59:51.604Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-07T22:59:51.623Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-07T22:59:51.623Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-07T22:59:51.627Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4fda3446f8920824","local-member-id":"b8434eb7a3474524","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-07T22:59:51.627Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-07T22:59:51.627Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-07T22:59:51.628Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-07T22:59:58.481Z","caller":"traceutil/trace.go:171","msg":"trace[1302482345] transaction","detail":"{read_only:false; response_revision:325; number_of_response:1; }","duration":"100.702696ms","start":"2023-07-07T22:59:58.380Z","end":"2023-07-07T22:59:58.481Z","steps":["trace[1302482345] 'process raft request'  (duration: 100.648496ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-07T22:59:58.787Z","caller":"traceutil/trace.go:171","msg":"trace[1006946902] transaction","detail":"{read_only:false; response_revision:327; number_of_response:1; }","duration":"110.27163ms","start":"2023-07-07T22:59:58.677Z","end":"2023-07-07T22:59:58.787Z","steps":["trace[1006946902] 'process raft request'  (duration: 72.831398ms)","trace[1006946902] 'compare'  (duration: 37.29625ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  23:00:23 up 1 min,  0 users,  load average: 0.72, 0.26, 0.09
	Linux first-013000 5.10.57 #1 SMP Fri Jun 30 21:41:53 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7d650a923d41] <==
	* I0707 22:59:53.270482       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0707 22:59:53.270761       1 aggregator.go:152] initial CRD sync complete...
	I0707 22:59:53.270922       1 autoregister_controller.go:141] Starting autoregister controller
	I0707 22:59:53.271034       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0707 22:59:53.271082       1 cache.go:39] Caches are synced for autoregister controller
	I0707 22:59:53.321643       1 shared_informer.go:318] Caches are synced for node_authorizer
	E0707 22:59:53.335715       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0707 22:59:53.335760       1 controller.go:150] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0707 22:59:53.405091       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0707 22:59:53.927920       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0707 22:59:54.197953       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0707 22:59:54.200958       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0707 22:59:54.200988       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0707 22:59:54.503680       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0707 22:59:54.541853       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0707 22:59:54.653378       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0707 22:59:54.657540       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.64.51]
	I0707 22:59:54.658261       1 controller.go:624] quota admission added evaluator for: endpoints
	I0707 22:59:54.661554       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0707 22:59:55.232095       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0707 22:59:56.296307       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0707 22:59:56.303923       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0707 22:59:56.311580       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0707 23:00:09.502702       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0707 23:00:09.507208       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [0193ba7b0d79] <==
	* I0707 23:00:09.490305       1 range_allocator.go:174] "Sending events to api server"
	I0707 23:00:09.490366       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0707 23:00:09.490395       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0707 23:00:09.490402       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0707 23:00:09.490554       1 shared_informer.go:318] Caches are synced for expand
	I0707 23:00:09.493020       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0707 23:00:09.496061       1 shared_informer.go:318] Caches are synced for crt configmap
	I0707 23:00:09.504651       1 shared_informer.go:318] Caches are synced for attach detach
	I0707 23:00:09.507402       1 shared_informer.go:318] Caches are synced for PV protection
	I0707 23:00:09.508591       1 range_allocator.go:380] "Set node PodCIDR" node="first-013000" podCIDRs=[10.244.0.0/24]
	I0707 23:00:09.515140       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 1"
	I0707 23:00:09.532197       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6zrwv"
	I0707 23:00:09.564588       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-s4ln4"
	I0707 23:00:09.580938       1 shared_informer.go:318] Caches are synced for taint
	I0707 23:00:09.581028       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0707 23:00:09.581119       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="first-013000"
	I0707 23:00:09.581168       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0707 23:00:09.581180       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0707 23:00:09.581204       1 taint_manager.go:211] "Sending events to api server"
	I0707 23:00:09.581694       1 event.go:307] "Event occurred" object="first-013000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node first-013000 event: Registered Node first-013000 in Controller"
	I0707 23:00:09.686653       1 shared_informer.go:318] Caches are synced for resource quota
	I0707 23:00:09.693727       1 shared_informer.go:318] Caches are synced for resource quota
	I0707 23:00:10.025363       1 shared_informer.go:318] Caches are synced for garbage collector
	I0707 23:00:10.031098       1 shared_informer.go:318] Caches are synced for garbage collector
	I0707 23:00:10.031288       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [f81b2313a30c] <==
	* I0707 23:00:10.098491       1 node.go:141] Successfully retrieved node IP: 192.168.64.51
	I0707 23:00:10.098588       1 server_others.go:110] "Detected node IP" address="192.168.64.51"
	I0707 23:00:10.098611       1 server_others.go:554] "Using iptables proxy"
	I0707 23:00:10.134008       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0707 23:00:10.134152       1 server_others.go:192] "Using iptables Proxier"
	I0707 23:00:10.134212       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0707 23:00:10.134573       1 server.go:658] "Version info" version="v1.27.3"
	I0707 23:00:10.134664       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0707 23:00:10.135159       1 config.go:188] "Starting service config controller"
	I0707 23:00:10.135258       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0707 23:00:10.135312       1 config.go:97] "Starting endpoint slice config controller"
	I0707 23:00:10.135326       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0707 23:00:10.135794       1 config.go:315] "Starting node config controller"
	I0707 23:00:10.135821       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0707 23:00:10.236072       1 shared_informer.go:318] Caches are synced for node config
	I0707 23:00:10.236136       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0707 23:00:10.236079       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [1238639ff0ed] <==
	* W0707 22:59:53.262143       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0707 22:59:53.262195       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0707 22:59:53.262373       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0707 22:59:53.262403       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0707 22:59:53.262450       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0707 22:59:53.262533       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0707 22:59:53.262641       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0707 22:59:53.262674       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0707 22:59:53.262695       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0707 22:59:53.262700       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0707 22:59:54.133947       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0707 22:59:54.133999       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0707 22:59:54.134455       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0707 22:59:54.134471       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0707 22:59:54.156280       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0707 22:59:54.156330       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0707 22:59:54.157234       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0707 22:59:54.157471       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0707 22:59:54.214218       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0707 22:59:54.214236       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0707 22:59:54.376641       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0707 22:59:54.376678       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0707 22:59:54.440019       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0707 22:59:54.440219       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0707 22:59:57.249142       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Fri 2023-07-07 22:59:28 UTC, ends at Fri 2023-07-07 23:00:23 UTC. --
	Jul 07 22:59:57 first-013000 kubelet[2388]: I0707 22:59:57.275769    2388 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jul 07 22:59:57 first-013000 kubelet[2388]: I0707 22:59:57.386325    2388 apiserver.go:52] "Watching apiserver"
	Jul 07 22:59:57 first-013000 kubelet[2388]: I0707 22:59:57.408911    2388 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jul 07 22:59:57 first-013000 kubelet[2388]: I0707 22:59:57.428922    2388 reconciler.go:41] "Reconciler: start to sync state"
	Jul 07 22:59:57 first-013000 kubelet[2388]: I0707 22:59:57.503229    2388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-first-013000" podStartSLOduration=3.502263127 podCreationTimestamp="2023-07-07 22:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-07 22:59:57.502234039 +0000 UTC m=+1.228562929" watchObservedRunningTime="2023-07-07 22:59:57.502263127 +0000 UTC m=+1.228592015"
	Jul 07 22:59:57 first-013000 kubelet[2388]: I0707 22:59:57.509665    2388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-first-013000" podStartSLOduration=1.509645084 podCreationTimestamp="2023-07-07 22:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-07 22:59:57.509184246 +0000 UTC m=+1.235513136" watchObservedRunningTime="2023-07-07 22:59:57.509645084 +0000 UTC m=+1.235973980"
	Jul 07 22:59:57 first-013000 kubelet[2388]: I0707 22:59:57.516420    2388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-first-013000" podStartSLOduration=1.516326243 podCreationTimestamp="2023-07-07 22:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-07 22:59:57.515715046 +0000 UTC m=+1.242043936" watchObservedRunningTime="2023-07-07 22:59:57.516326243 +0000 UTC m=+1.242655132"
	Jul 07 22:59:57 first-013000 kubelet[2388]: I0707 22:59:57.532403    2388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-first-013000" podStartSLOduration=3.5323802669999997 podCreationTimestamp="2023-07-07 22:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-07 22:59:57.522496304 +0000 UTC m=+1.248825194" watchObservedRunningTime="2023-07-07 22:59:57.532380267 +0000 UTC m=+1.258709157"
	Jul 07 23:00:09 first-013000 kubelet[2388]: I0707 23:00:09.533077    2388 topology_manager.go:212] "Topology Admit Handler"
	Jul 07 23:00:09 first-013000 kubelet[2388]: I0707 23:00:09.607366    2388 topology_manager.go:212] "Topology Admit Handler"
	Jul 07 23:00:09 first-013000 kubelet[2388]: I0707 23:00:09.620265    2388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b103bd6c-f82a-4d20-a0f3-d587cfe7b842-kube-proxy\") pod \"kube-proxy-6zrwv\" (UID: \"b103bd6c-f82a-4d20-a0f3-d587cfe7b842\") " pod="kube-system/kube-proxy-6zrwv"
	Jul 07 23:00:09 first-013000 kubelet[2388]: I0707 23:00:09.620336    2388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjkjm\" (UniqueName: \"kubernetes.io/projected/b103bd6c-f82a-4d20-a0f3-d587cfe7b842-kube-api-access-sjkjm\") pod \"kube-proxy-6zrwv\" (UID: \"b103bd6c-f82a-4d20-a0f3-d587cfe7b842\") " pod="kube-system/kube-proxy-6zrwv"
	Jul 07 23:00:09 first-013000 kubelet[2388]: I0707 23:00:09.620358    2388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b103bd6c-f82a-4d20-a0f3-d587cfe7b842-xtables-lock\") pod \"kube-proxy-6zrwv\" (UID: \"b103bd6c-f82a-4d20-a0f3-d587cfe7b842\") " pod="kube-system/kube-proxy-6zrwv"
	Jul 07 23:00:09 first-013000 kubelet[2388]: I0707 23:00:09.620373    2388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b103bd6c-f82a-4d20-a0f3-d587cfe7b842-lib-modules\") pod \"kube-proxy-6zrwv\" (UID: \"b103bd6c-f82a-4d20-a0f3-d587cfe7b842\") " pod="kube-system/kube-proxy-6zrwv"
	Jul 07 23:00:09 first-013000 kubelet[2388]: I0707 23:00:09.620388    2388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a294981e-35ca-4a70-be07-6b3a20560195-tmp\") pod \"storage-provisioner\" (UID: \"a294981e-35ca-4a70-be07-6b3a20560195\") " pod="kube-system/storage-provisioner"
	Jul 07 23:00:09 first-013000 kubelet[2388]: I0707 23:00:09.620403    2388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h47q4\" (UniqueName: \"kubernetes.io/projected/a294981e-35ca-4a70-be07-6b3a20560195-kube-api-access-h47q4\") pod \"storage-provisioner\" (UID: \"a294981e-35ca-4a70-be07-6b3a20560195\") " pod="kube-system/storage-provisioner"
	Jul 07 23:00:10 first-013000 kubelet[2388]: I0707 23:00:10.538823    2388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6zrwv" podStartSLOduration=1.538797242 podCreationTimestamp="2023-07-07 23:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-07 23:00:10.537906545 +0000 UTC m=+14.264235435" watchObservedRunningTime="2023-07-07 23:00:10.538797242 +0000 UTC m=+14.265126138"
	Jul 07 23:00:11 first-013000 kubelet[2388]: I0707 23:00:11.161429    2388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.16140142 podCreationTimestamp="2023-07-07 22:59:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-07 23:00:10.55229603 +0000 UTC m=+14.278624919" watchObservedRunningTime="2023-07-07 23:00:11.16140142 +0000 UTC m=+14.887730315"
	Jul 07 23:00:11 first-013000 kubelet[2388]: I0707 23:00:11.161604    2388 topology_manager.go:212] "Topology Admit Handler"
	Jul 07 23:00:11 first-013000 kubelet[2388]: I0707 23:00:11.232308    2388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpb8p\" (UniqueName: \"kubernetes.io/projected/6453464f-5ed5-4793-aafe-fa6f0ba686ab-kube-api-access-jpb8p\") pod \"coredns-5d78c9869d-s4ln4\" (UID: \"6453464f-5ed5-4793-aafe-fa6f0ba686ab\") " pod="kube-system/coredns-5d78c9869d-s4ln4"
	Jul 07 23:00:11 first-013000 kubelet[2388]: I0707 23:00:11.232412    2388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6453464f-5ed5-4793-aafe-fa6f0ba686ab-config-volume\") pod \"coredns-5d78c9869d-s4ln4\" (UID: \"6453464f-5ed5-4793-aafe-fa6f0ba686ab\") " pod="kube-system/coredns-5d78c9869d-s4ln4"
	Jul 07 23:00:11 first-013000 kubelet[2388]: I0707 23:00:11.791050    2388 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4420f0a3b4bcc9994be7bddb89e76a3c140ea6d90173611954715e2c4ae7d7c"
	Jul 07 23:00:12 first-013000 kubelet[2388]: I0707 23:00:12.808361    2388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-s4ln4" podStartSLOduration=3.808340067 podCreationTimestamp="2023-07-07 23:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-07 23:00:12.808122041 +0000 UTC m=+16.534450932" watchObservedRunningTime="2023-07-07 23:00:12.808340067 +0000 UTC m=+16.534668957"
	Jul 07 23:00:17 first-013000 kubelet[2388]: I0707 23:00:17.187259    2388 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 07 23:00:17 first-013000 kubelet[2388]: I0707 23:00:17.188683    2388 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	
	* 
	* ==> storage-provisioner [af2a3d92627e] <==
	* I0707 23:00:10.401025       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0707 23:00:10.408750       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0707 23:00:10.408807       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0707 23:00:10.413785       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0707 23:00:10.414299       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bd18016b-4535-49cf-ba00-26c498cb49d1", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' first-013000_2e824ee5-1485-4c22-856c-c9973aba7269 became leader
	I0707 23:00:10.414383       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_first-013000_2e824ee5-1485-4c22-856c-c9973aba7269!
	I0707 23:00:10.514970       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_first-013000_2e824ee5-1485-4c22-856c-c9973aba7269!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p first-013000 -n first-013000
helpers_test.go:261: (dbg) Run:  kubectl --context first-013000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMinikubeProfile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "first-013000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-013000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-013000: (5.251654474s)
--- FAIL: TestMinikubeProfile (70.81s)
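
To replay the failing pre-condition locally, the commands from this run can be reissued as-is (a sketch; binary path and profile names are the ones recorded above):

    out/minikube-darwin-amd64 start -p first-013000 --driver=hyperkit
    out/minikube-darwin-amd64 start -p second-015000 --driver=hyperkit
    out/minikube-darwin-amd64 logs --file=logs.txt -p second-015000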

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (155.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-136000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0707 16:09:19.631425   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-136000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : exit status 90 (2m31.605454629s)
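
Exit status 90 here matches the RUNTIME_ENABLE exit recorded for TestMinikubeProfile above. A minimal state sweep for the profile (a sketch; binary path and profile name as captured in this run):

    out/minikube-darwin-amd64 status -p multinode-136000
    out/minikube-darwin-amd64 node list -p multinode-136000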

                                                
                                                
-- stdout --
	* [multinode-136000] minikube v1.30.1 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16845
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16845-29196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16845-29196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting control plane node multinode-136000 in cluster multinode-136000
	* Restarting existing hyperkit VM for "multinode-136000" ...
	* Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Starting worker node multinode-136000-m02 in cluster multinode-136000
	* Restarting existing hyperkit VM for "multinode-136000-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.64.55
	
	

-- /stdout --
** stderr ** 
	I0707 16:08:28.188680   32269 out.go:296] Setting OutFile to fd 1 ...
	I0707 16:08:28.188843   32269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 16:08:28.188849   32269 out.go:309] Setting ErrFile to fd 2...
	I0707 16:08:28.188853   32269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 16:08:28.188964   32269 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16845-29196/.minikube/bin
	I0707 16:08:28.190438   32269 out.go:303] Setting JSON to false
	I0707 16:08:28.209923   32269 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":11273,"bootTime":1688760035,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0707 16:08:28.210029   32269 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0707 16:08:28.232171   32269 out.go:177] * [multinode-136000] minikube v1.30.1 on Darwin 13.4.1
	I0707 16:08:28.274831   32269 out.go:177]   - MINIKUBE_LOCATION=16845
	I0707 16:08:28.274890   32269 notify.go:220] Checking for updates...
	I0707 16:08:28.318692   32269 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16845-29196/kubeconfig
	I0707 16:08:28.339737   32269 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0707 16:08:28.381733   32269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0707 16:08:28.402814   32269 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16845-29196/.minikube
	I0707 16:08:28.444868   32269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0707 16:08:28.466642   32269 config.go:182] Loaded profile config "multinode-136000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0707 16:08:28.467309   32269 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:08:28.467392   32269 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 16:08:28.475111   32269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49214
	I0707 16:08:28.475452   32269 main.go:141] libmachine: () Calling .GetVersion
	I0707 16:08:28.475891   32269 main.go:141] libmachine: Using API Version  1
	I0707 16:08:28.475912   32269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 16:08:28.476164   32269 main.go:141] libmachine: () Calling .GetMachineName
	I0707 16:08:28.476281   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:28.476455   32269 driver.go:373] Setting default libvirt URI to qemu:///system
	I0707 16:08:28.476700   32269 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:08:28.476728   32269 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 16:08:28.483479   32269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49216
	I0707 16:08:28.483793   32269 main.go:141] libmachine: () Calling .GetVersion
	I0707 16:08:28.484134   32269 main.go:141] libmachine: Using API Version  1
	I0707 16:08:28.484149   32269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 16:08:28.484378   32269 main.go:141] libmachine: () Calling .GetMachineName
	I0707 16:08:28.484479   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:28.511776   32269 out.go:177] * Using the hyperkit driver based on existing profile
	I0707 16:08:28.553740   32269 start.go:297] selected driver: hyperkit
	I0707 16:08:28.553760   32269 start.go:944] validating driver "hyperkit" against &{Name:multinode-136000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.55 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.56 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0707 16:08:28.553937   32269 start.go:955] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0707 16:08:28.554110   32269 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0707 16:08:28.554277   32269 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/16845-29196/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0707 16:08:28.562351   32269 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.30.1
	I0707 16:08:28.565856   32269 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:08:28.565878   32269 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0707 16:08:28.568178   32269 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0707 16:08:28.568229   32269 cni.go:84] Creating CNI manager for ""
	I0707 16:08:28.568237   32269 cni.go:137] 2 nodes found, recommending kindnet
	I0707 16:08:28.568261   32269 start_flags.go:319] config:
	{Name:multinode-136000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.55 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.56 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0707 16:08:28.568507   32269 iso.go:125] acquiring lock: {Name:mkc26c030f62bdf6e3ab619c68665518d3e66b24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0707 16:08:28.610755   32269 out.go:177] * Starting control plane node multinode-136000 in cluster multinode-136000
	I0707 16:08:28.631646   32269 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0707 16:08:28.631696   32269 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16845-29196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0707 16:08:28.631714   32269 cache.go:57] Caching tarball of preloaded images
	I0707 16:08:28.631805   32269 preload.go:174] Found /Users/jenkins/minikube-integration/16845-29196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0707 16:08:28.631814   32269 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0707 16:08:28.631920   32269 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/config.json ...
	I0707 16:08:28.632354   32269 start.go:365] acquiring machines lock for multinode-136000: {Name:mk81f6152b3f423bf222fad0025fe3c8ddb3ea12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0707 16:08:28.632405   32269 start.go:369] acquired machines lock for "multinode-136000" in 39.211µs
	I0707 16:08:28.632428   32269 start.go:96] Skipping create...Using existing machine configuration
	I0707 16:08:28.632436   32269 fix.go:54] fixHost starting: 
	I0707 16:08:28.632660   32269 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:08:28.632683   32269 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 16:08:28.639995   32269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49218
	I0707 16:08:28.640345   32269 main.go:141] libmachine: () Calling .GetVersion
	I0707 16:08:28.640723   32269 main.go:141] libmachine: Using API Version  1
	I0707 16:08:28.640736   32269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 16:08:28.640968   32269 main.go:141] libmachine: () Calling .GetMachineName
	I0707 16:08:28.641082   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:28.641168   32269 main.go:141] libmachine: (multinode-136000) Calling .GetState
	I0707 16:08:28.641249   32269 main.go:141] libmachine: (multinode-136000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:08:28.641304   32269 main.go:141] libmachine: (multinode-136000) DBG | hyperkit pid from json: 32119
	I0707 16:08:28.642238   32269 main.go:141] libmachine: (multinode-136000) DBG | hyperkit pid 32119 missing from process table
	I0707 16:08:28.642280   32269 fix.go:102] recreateIfNeeded on multinode-136000: state=Stopped err=<nil>
	I0707 16:08:28.642301   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	W0707 16:08:28.642399   32269 fix.go:128] unexpected machine state, will restart: <nil>
	I0707 16:08:28.684727   32269 out.go:177] * Restarting existing hyperkit VM for "multinode-136000" ...
	I0707 16:08:28.705745   32269 main.go:141] libmachine: (multinode-136000) Calling .Start
	I0707 16:08:28.706017   32269 main.go:141] libmachine: (multinode-136000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:08:28.706087   32269 main.go:141] libmachine: (multinode-136000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/hyperkit.pid
	I0707 16:08:28.707885   32269 main.go:141] libmachine: (multinode-136000) DBG | hyperkit pid 32119 missing from process table
	I0707 16:08:28.707904   32269 main.go:141] libmachine: (multinode-136000) DBG | pid 32119 is in state "Stopped"
	I0707 16:08:28.707933   32269 main.go:141] libmachine: (multinode-136000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/hyperkit.pid...
	I0707 16:08:28.708191   32269 main.go:141] libmachine: (multinode-136000) DBG | Using UUID 4429c2bc-1d1a-11ee-8196-149d997f80ea
	I0707 16:08:28.828161   32269 main.go:141] libmachine: (multinode-136000) DBG | Generated MAC 66:77:10:3:27:1c
	I0707 16:08:28.828183   32269 main.go:141] libmachine: (multinode-136000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-136000
	I0707 16:08:28.828311   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4429c2bc-1d1a-11ee-8196-149d997f80ea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000436390)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/bzimage", Initrd:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0707 16:08:28.828352   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4429c2bc-1d1a-11ee-8196-149d997f80ea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000436390)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/bzimage", Initrd:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0707 16:08:28.828416   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4429c2bc-1d1a-11ee-8196-149d997f80ea", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/multinode-136000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/tty,log=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/bzimage,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-136000"}
	I0707 16:08:28.828449   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4429c2bc-1d1a-11ee-8196-149d997f80ea -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/multinode-136000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/tty,log=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/console-ring -f kexec,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/bzimage,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-136000"
	I0707 16:08:28.828458   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0707 16:08:28.829937   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 DEBUG: hyperkit: Pid is 32285
	I0707 16:08:28.830493   32269 main.go:141] libmachine: (multinode-136000) DBG | Attempt 0
	I0707 16:08:28.830539   32269 main.go:141] libmachine: (multinode-136000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:08:28.830596   32269 main.go:141] libmachine: (multinode-136000) DBG | hyperkit pid from json: 32285
	I0707 16:08:28.832320   32269 main.go:141] libmachine: (multinode-136000) DBG | Searching for 66:77:10:3:27:1c in /var/db/dhcpd_leases ...
	I0707 16:08:28.832438   32269 main.go:141] libmachine: (multinode-136000) DBG | Found 56 entries in /var/db/dhcpd_leases!
	I0707 16:08:28.832456   32269 main.go:141] libmachine: (multinode-136000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.57 HWAddress:e2:5d:8d:f1:83:3b ID:1,e2:5d:8d:f1:83:3b Lease:0x64a89ada}
	I0707 16:08:28.832470   32269 main.go:141] libmachine: (multinode-136000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.56 HWAddress:b2:4b:8:0:c2:14 ID:1,b2:4b:8:0:c2:14 Lease:0x64a9ebeb}
	I0707 16:08:28.832483   32269 main.go:141] libmachine: (multinode-136000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.55 HWAddress:66:77:10:3:27:1c ID:1,66:77:10:3:27:1c Lease:0x64a9ebb4}
	I0707 16:08:28.832495   32269 main.go:141] libmachine: (multinode-136000) DBG | Found match: 66:77:10:3:27:1c
	I0707 16:08:28.832505   32269 main.go:141] libmachine: (multinode-136000) DBG | IP: 192.168.64.55
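
The IP discovery above works by scanning the macOS host's vmnet DHCP lease database for the VM's generated MAC address. The same lookup can be reproduced by hand on the host (a sketch; the MAC is taken from this log):

    grep -C 3 '66:77:10:3:27:1c' /var/db/dhcpd_leases
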
	I0707 16:08:28.832558   32269 main.go:141] libmachine: (multinode-136000) Calling .GetConfigRaw
	I0707 16:08:28.833139   32269 main.go:141] libmachine: (multinode-136000) Calling .GetIP
	I0707 16:08:28.833339   32269 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/config.json ...
	I0707 16:08:28.833663   32269 machine.go:88] provisioning docker machine ...
	I0707 16:08:28.833681   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:28.833813   32269 main.go:141] libmachine: (multinode-136000) Calling .GetMachineName
	I0707 16:08:28.833937   32269 buildroot.go:166] provisioning hostname "multinode-136000"
	I0707 16:08:28.833953   32269 main.go:141] libmachine: (multinode-136000) Calling .GetMachineName
	I0707 16:08:28.834085   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:28.834208   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:28.834346   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:28.834466   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:28.834580   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:28.834730   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:08:28.835111   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.55 22 <nil> <nil>}
	I0707 16:08:28.835122   32269 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-136000 && echo "multinode-136000" | sudo tee /etc/hostname
	I0707 16:08:28.837192   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0707 16:08:28.895034   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0707 16:08:28.895832   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0707 16:08:28.895876   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0707 16:08:28.895912   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0707 16:08:28.895933   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0707 16:08:29.260221   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0707 16:08:29.260237   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0707 16:08:29.364352   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0707 16:08:29.364373   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0707 16:08:29.364398   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0707 16:08:29.364414   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0707 16:08:29.365281   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0707 16:08:29.365290   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0707 16:08:34.206083   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:34 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0707 16:08:34.206142   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:34 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0707 16:08:34.206151   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:34 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0707 16:08:39.922084   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-136000
	
	I0707 16:08:39.922101   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:39.922230   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:39.922327   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:39.922420   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:39.922528   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:39.922696   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:08:39.923044   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.55 22 <nil> <nil>}
	I0707 16:08:39.923058   32269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-136000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-136000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-136000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0707 16:08:39.993711   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0707 16:08:39.993729   32269 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16845-29196/.minikube CaCertPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16845-29196/.minikube}
	I0707 16:08:39.993749   32269 buildroot.go:174] setting up certificates
	I0707 16:08:39.993759   32269 provision.go:83] configureAuth start
	I0707 16:08:39.993766   32269 main.go:141] libmachine: (multinode-136000) Calling .GetMachineName
	I0707 16:08:39.993902   32269 main.go:141] libmachine: (multinode-136000) Calling .GetIP
	I0707 16:08:39.994001   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:39.994098   32269 provision.go:138] copyHostCerts
	I0707 16:08:39.994156   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem
	I0707 16:08:39.994215   32269 exec_runner.go:144] found /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem, removing ...
	I0707 16:08:39.994222   32269 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem
	I0707 16:08:39.994327   32269 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem (1082 bytes)
	I0707 16:08:39.994516   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem
	I0707 16:08:39.994559   32269 exec_runner.go:144] found /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem, removing ...
	I0707 16:08:39.994568   32269 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem
	I0707 16:08:39.994636   32269 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem (1123 bytes)
	I0707 16:08:39.994776   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem
	I0707 16:08:39.994817   32269 exec_runner.go:144] found /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem, removing ...
	I0707 16:08:39.994821   32269 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem
	I0707 16:08:39.994879   32269 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem (1675 bytes)
	I0707 16:08:39.995013   32269 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca-key.pem org=jenkins.multinode-136000 san=[192.168.64.55 192.168.64.55 localhost 127.0.0.1 minikube multinode-136000]
	I0707 16:08:40.207508   32269 provision.go:172] copyRemoteCerts
	I0707 16:08:40.207604   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0707 16:08:40.207620   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:40.207834   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:40.208011   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:40.208259   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:40.208434   32269 sshutil.go:53] new ssh client: &{IP:192.168.64.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/id_rsa Username:docker}
	I0707 16:08:40.247859   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0707 16:08:40.247966   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0707 16:08:40.264154   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0707 16:08:40.264216   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0707 16:08:40.279893   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0707 16:08:40.279989   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0707 16:08:40.295907   32269 provision.go:86] duration metric: configureAuth took 302.129917ms
	I0707 16:08:40.295919   32269 buildroot.go:189] setting minikube options for container-runtime
	I0707 16:08:40.296111   32269 config.go:182] Loaded profile config "multinode-136000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0707 16:08:40.296152   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:40.296285   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:40.296381   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:40.296512   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:40.296587   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:40.296673   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:40.296780   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:08:40.297070   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.55 22 <nil> <nil>}
	I0707 16:08:40.297078   32269 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0707 16:08:40.365704   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0707 16:08:40.365722   32269 buildroot.go:70] root file system type: tmpfs
	I0707 16:08:40.365782   32269 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0707 16:08:40.365795   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:40.365936   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:40.366043   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:40.366172   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:40.366278   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:40.366435   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:08:40.366733   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.55 22 <nil> <nil>}
	I0707 16:08:40.366778   32269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0707 16:08:40.440549   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0707 16:08:40.440580   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:40.440718   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:40.440818   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:40.440915   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:40.441006   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:40.441156   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:08:40.441471   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.55 22 <nil> <nil>}
	I0707 16:08:40.441490   32269 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0707 16:08:41.161548   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0707 16:08:41.161562   32269 machine.go:91] provisioned docker machine in 12.327619344s
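
The docker.service file written during provisioning relies on the systemd rule its own comments describe: an empty ExecStart= resets any inherited value, so the ExecStart= that follows becomes the only one. A minimal drop-in illustrating the same pattern (hypothetical, not part of this run):

    # /etc/systemd/system/docker.service.d/override.conf (illustrative)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock

    # sudo systemd-analyze verify docker.service   # checks the merged unit for errors
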
	I0707 16:08:41.161572   32269 start.go:300] post-start starting for "multinode-136000" (driver="hyperkit")
	I0707 16:08:41.161584   32269 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0707 16:08:41.161598   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:41.161796   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0707 16:08:41.161812   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:41.161915   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:41.162008   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:41.162091   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:41.162171   32269 sshutil.go:53] new ssh client: &{IP:192.168.64.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/id_rsa Username:docker}
	I0707 16:08:41.201693   32269 ssh_runner.go:195] Run: cat /etc/os-release
	I0707 16:08:41.204061   32269 command_runner.go:130] > NAME=Buildroot
	I0707 16:08:41.204070   32269 command_runner.go:130] > VERSION=2021.02.12-1-g6f2898e-dirty
	I0707 16:08:41.204076   32269 command_runner.go:130] > ID=buildroot
	I0707 16:08:41.204081   32269 command_runner.go:130] > VERSION_ID=2021.02.12
	I0707 16:08:41.204088   32269 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0707 16:08:41.204287   32269 info.go:137] Remote host: Buildroot 2021.02.12
	I0707 16:08:41.204298   32269 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16845-29196/.minikube/addons for local assets ...
	I0707 16:08:41.204379   32269 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16845-29196/.minikube/files for local assets ...
	I0707 16:08:41.204549   32269 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem -> 296432.pem in /etc/ssl/certs
	I0707 16:08:41.204556   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem -> /etc/ssl/certs/296432.pem
	I0707 16:08:41.204730   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0707 16:08:41.210831   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem --> /etc/ssl/certs/296432.pem (1708 bytes)
	I0707 16:08:41.226283   32269 start.go:303] post-start completed in 64.701658ms
	I0707 16:08:41.226297   32269 fix.go:56] fixHost completed within 12.593586593s
	I0707 16:08:41.226314   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:41.226440   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:41.226522   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:41.226615   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:41.226699   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:41.226825   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:08:41.227129   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.55 22 <nil> <nil>}
	I0707 16:08:41.227137   32269 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0707 16:08:41.292064   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688771321.128629285
	
	I0707 16:08:41.292076   32269 fix.go:206] guest clock: 1688771321.128629285
	I0707 16:08:41.292081   32269 fix.go:219] Guest: 2023-07-07 16:08:41.128629285 -0700 PDT Remote: 2023-07-07 16:08:41.2263 -0700 PDT m=+13.070964927 (delta=-97.670715ms)
	I0707 16:08:41.292099   32269 fix.go:190] guest clock delta is within tolerance: -97.670715ms
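
(Worked out from the two timestamps above: delta = guest − host = 1688771321.128629285 − 1688771321.226300000 ≈ −0.0977 s, i.e. the −97.670715 ms the log reports, inside minikube's clock-skew tolerance.)
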
	I0707 16:08:41.292103   32269 start.go:83] releasing machines lock for "multinode-136000", held for 12.659414156s
	I0707 16:08:41.292119   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:41.292240   32269 main.go:141] libmachine: (multinode-136000) Calling .GetIP
	I0707 16:08:41.292332   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:41.292655   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:41.292786   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:41.292873   32269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0707 16:08:41.292907   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:41.292927   32269 ssh_runner.go:195] Run: cat /version.json
	I0707 16:08:41.292938   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:41.293044   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:41.293054   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:41.293156   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:41.293169   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:41.293245   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:41.293262   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:41.293327   32269 sshutil.go:53] new ssh client: &{IP:192.168.64.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/id_rsa Username:docker}
	I0707 16:08:41.293356   32269 sshutil.go:53] new ssh client: &{IP:192.168.64.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/id_rsa Username:docker}
	I0707 16:08:41.370650   32269 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0707 16:08:41.371560   32269 command_runner.go:130] > {"iso_version": "v1.30.1-1688144767-16765", "kicbase_version": "v0.0.39-1687538068-16731", "minikube_version": "v1.30.1", "commit": "ea1fcc3c7b384862404a5ec9a04bec1496959f9b"}
	I0707 16:08:41.371683   32269 ssh_runner.go:195] Run: systemctl --version
	I0707 16:08:41.375584   32269 command_runner.go:130] > systemd 247 (247)
	I0707 16:08:41.375601   32269 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0707 16:08:41.375907   32269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0707 16:08:41.379320   32269 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0707 16:08:41.379338   32269 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0707 16:08:41.379378   32269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0707 16:08:41.389522   32269 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0707 16:08:41.389544   32269 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0707 16:08:41.389550   32269 start.go:466] detecting cgroup driver to use...
	I0707 16:08:41.389648   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0707 16:08:41.402447   32269 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0707 16:08:41.402777   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0707 16:08:41.409298   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0707 16:08:41.415695   32269 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0707 16:08:41.415734   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0707 16:08:41.422221   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0707 16:08:41.428852   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0707 16:08:41.435480   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0707 16:08:41.442011   32269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0707 16:08:41.448679   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0707 16:08:41.455330   32269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0707 16:08:41.461097   32269 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0707 16:08:41.461175   32269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0707 16:08:41.467205   32269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0707 16:08:41.551382   32269 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0707 16:08:41.564033   32269 start.go:466] detecting cgroup driver to use...
	I0707 16:08:41.564108   32269 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0707 16:08:41.572808   32269 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0707 16:08:41.573462   32269 command_runner.go:130] > [Unit]
	I0707 16:08:41.573471   32269 command_runner.go:130] > Description=Docker Application Container Engine
	I0707 16:08:41.573476   32269 command_runner.go:130] > Documentation=https://docs.docker.com
	I0707 16:08:41.573480   32269 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0707 16:08:41.573485   32269 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0707 16:08:41.573491   32269 command_runner.go:130] > StartLimitBurst=3
	I0707 16:08:41.573495   32269 command_runner.go:130] > StartLimitIntervalSec=60
	I0707 16:08:41.573498   32269 command_runner.go:130] > [Service]
	I0707 16:08:41.573502   32269 command_runner.go:130] > Type=notify
	I0707 16:08:41.573505   32269 command_runner.go:130] > Restart=on-failure
	I0707 16:08:41.573515   32269 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0707 16:08:41.573529   32269 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0707 16:08:41.573537   32269 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0707 16:08:41.573543   32269 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0707 16:08:41.573548   32269 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0707 16:08:41.573554   32269 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0707 16:08:41.573560   32269 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0707 16:08:41.573571   32269 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0707 16:08:41.573578   32269 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0707 16:08:41.573583   32269 command_runner.go:130] > ExecStart=
	I0707 16:08:41.573596   32269 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0707 16:08:41.573604   32269 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0707 16:08:41.573611   32269 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0707 16:08:41.573616   32269 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0707 16:08:41.573620   32269 command_runner.go:130] > LimitNOFILE=infinity
	I0707 16:08:41.573625   32269 command_runner.go:130] > LimitNPROC=infinity
	I0707 16:08:41.573631   32269 command_runner.go:130] > LimitCORE=infinity
	I0707 16:08:41.573639   32269 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0707 16:08:41.573655   32269 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0707 16:08:41.573661   32269 command_runner.go:130] > TasksMax=infinity
	I0707 16:08:41.573665   32269 command_runner.go:130] > TimeoutStartSec=0
	I0707 16:08:41.573670   32269 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0707 16:08:41.573676   32269 command_runner.go:130] > Delegate=yes
	I0707 16:08:41.573684   32269 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0707 16:08:41.573688   32269 command_runner.go:130] > KillMode=process
	I0707 16:08:41.573693   32269 command_runner.go:130] > [Install]
	I0707 16:08:41.573706   32269 command_runner.go:130] > WantedBy=multi-user.target
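
The drop-in above leans on systemd's override mechanism: because only Type=oneshot services may carry multiple ExecStart= directives, the bare ExecStart= line clears the command inherited from the base dockerd unit before the replacement is declared. A minimal sketch of writing such an override in Go (directory, file name, and flags here are illustrative, not the exact unit minikube ships):

package main

import "os"

// Sketch only: the empty ExecStart= clears the command inherited from the
// base unit, then the new command is declared. Path and flags are assumed.
const dropIn = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
`

func main() {
	if err := os.MkdirAll("/etc/systemd/system/docker.service.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/systemd/system/docker.service.d/10-override.conf", []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
}
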
	I0707 16:08:41.573785   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0707 16:08:41.582744   32269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0707 16:08:41.594339   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0707 16:08:41.603011   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0707 16:08:41.612260   32269 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0707 16:08:41.634151   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0707 16:08:41.647652   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0707 16:08:41.663289   32269 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
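
crictl consults /etc/crictl.yaml for its runtime endpoint; after the tee above, the file contains exactly the single line that was echoed back:

runtime-endpoint: unix:///var/run/cri-dockerd.sock
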
	I0707 16:08:41.663800   32269 ssh_runner.go:195] Run: which cri-dockerd
	I0707 16:08:41.667093   32269 command_runner.go:130] > /usr/bin/cri-dockerd
	I0707 16:08:41.667387   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0707 16:08:41.676804   32269 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0707 16:08:41.694334   32269 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0707 16:08:41.787385   32269 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0707 16:08:41.877749   32269 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0707 16:08:41.877765   32269 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
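
The daemon.json payload itself is not echoed in the log (only its 144-byte size). A hypothetical file of the shape that selects the cgroupfs driver would be:

{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}

Only the cgroup-driver setting is implied by the surrounding log lines; anything else in the real payload is not shown here.
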
	I0707 16:08:41.889423   32269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0707 16:08:41.976291   32269 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0707 16:08:43.314053   32269 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.337714109s)
	I0707 16:08:43.314116   32269 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0707 16:08:43.397895   32269 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0707 16:08:43.482770   32269 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0707 16:08:43.575848   32269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0707 16:08:43.665156   32269 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0707 16:08:43.679827   32269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0707 16:08:43.776772   32269 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0707 16:08:43.831235   32269 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0707 16:08:43.831338   32269 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0707 16:08:43.834842   32269 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0707 16:08:43.834853   32269 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0707 16:08:43.834858   32269 command_runner.go:130] > Device: 16h/22d	Inode: 900         Links: 1
	I0707 16:08:43.834863   32269 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0707 16:08:43.834868   32269 command_runner.go:130] > Access: 2023-07-07 23:08:43.694255325 +0000
	I0707 16:08:43.834872   32269 command_runner.go:130] > Modify: 2023-07-07 23:08:43.694255325 +0000
	I0707 16:08:43.834876   32269 command_runner.go:130] > Change: 2023-07-07 23:08:43.698350968 +0000
	I0707 16:08:43.834880   32269 command_runner.go:130] >  Birth: -
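
The 60-second wait above reduces to polling stat on the socket path until it appears. A compact sketch of that loop in Go (the 500ms interval is illustrative; the path is from the log):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls os.Stat until the path exists or the timeout lapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
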
	I0707 16:08:43.835371   32269 start.go:534] Will wait 60s for crictl version
	I0707 16:08:43.835423   32269 ssh_runner.go:195] Run: which crictl
	I0707 16:08:43.839790   32269 command_runner.go:130] > /usr/bin/crictl
	I0707 16:08:43.840052   32269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0707 16:08:43.865947   32269 command_runner.go:130] > Version:  0.1.0
	I0707 16:08:43.865960   32269 command_runner.go:130] > RuntimeName:  docker
	I0707 16:08:43.865964   32269 command_runner.go:130] > RuntimeVersion:  24.0.2
	I0707 16:08:43.865968   32269 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0707 16:08:43.866855   32269 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0707 16:08:43.866939   32269 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0707 16:08:43.883025   32269 command_runner.go:130] > 24.0.2
	I0707 16:08:43.883867   32269 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0707 16:08:43.899887   32269 command_runner.go:130] > 24.0.2
	I0707 16:08:43.924280   32269 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0707 16:08:43.924325   32269 main.go:141] libmachine: (multinode-136000) Calling .GetIP
	I0707 16:08:43.924746   32269 ssh_runner.go:195] Run: grep 192.168.64.1	host.minikube.internal$ /etc/hosts
	I0707 16:08:43.929044   32269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
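
The bash one-liner above makes the /etc/hosts update idempotent: grep -v strips any stale host.minikube.internal entry, the fresh mapping is appended, and the result is copied back over /etc/hosts. The same logic as a Go sketch (path and entry taken from the log; the write-in-place simplification is ours):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.64.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Keep every line that does not already map host.minikube.internal.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	// Append the fresh mapping and write the file back in one shot.
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
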
	I0707 16:08:43.937109   32269 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0707 16:08:43.937162   32269 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0707 16:08:43.949683   32269 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.3
	I0707 16:08:43.949695   32269 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.3
	I0707 16:08:43.949703   32269 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.3
	I0707 16:08:43.949707   32269 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.3
	I0707 16:08:43.949711   32269 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0707 16:08:43.949715   32269 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0707 16:08:43.949719   32269 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0707 16:08:43.949723   32269 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0707 16:08:43.949727   32269 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0707 16:08:43.949732   32269 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0707 16:08:43.950217   32269 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0707 16:08:43.950231   32269 docker.go:566] Images already preloaded, skipping extraction
	I0707 16:08:43.950296   32269 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0707 16:08:43.962829   32269 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.3
	I0707 16:08:43.962845   32269 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.3
	I0707 16:08:43.962856   32269 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.3
	I0707 16:08:43.962862   32269 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.3
	I0707 16:08:43.962866   32269 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0707 16:08:43.962873   32269 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0707 16:08:43.962879   32269 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0707 16:08:43.962884   32269 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0707 16:08:43.962889   32269 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0707 16:08:43.962895   32269 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0707 16:08:43.963355   32269 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0707 16:08:43.963377   32269 cache_images.go:84] Images are preloaded, skipping loading
	I0707 16:08:43.963448   32269 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0707 16:08:43.980303   32269 command_runner.go:130] > cgroupfs
	I0707 16:08:43.980867   32269 cni.go:84] Creating CNI manager for ""
	I0707 16:08:43.980877   32269 cni.go:137] 2 nodes found, recommending kindnet
	I0707 16:08:43.980891   32269 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0707 16:08:43.980906   32269 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.55 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-136000 NodeName:multinode-136000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0707 16:08:43.980995   32269 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.64.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-136000"
	  kubeletExtraArgs:
	    node-ip: 192.168.64.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.64.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0707 16:08:43.981055   32269 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-136000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
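
The kubeadm.yaml rendered above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch that walks such a stream and prints each document's kind, assuming gopkg.in/yaml.v3 (not part of this log) is available:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// A YAML stream decoder yields one document per Decode call.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
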
	I0707 16:08:43.981114   32269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0707 16:08:43.987846   32269 command_runner.go:130] > kubeadm
	I0707 16:08:43.987858   32269 command_runner.go:130] > kubectl
	I0707 16:08:43.987862   32269 command_runner.go:130] > kubelet
	I0707 16:08:43.987979   32269 binaries.go:44] Found k8s binaries, skipping transfer
	I0707 16:08:43.988029   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0707 16:08:43.994279   32269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0707 16:08:44.005201   32269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0707 16:08:44.016225   32269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0707 16:08:44.027328   32269 ssh_runner.go:195] Run: grep 192.168.64.55	control-plane.minikube.internal$ /etc/hosts
	I0707 16:08:44.029559   32269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0707 16:08:44.037379   32269 certs.go:56] Setting up /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000 for IP: 192.168.64.55
	I0707 16:08:44.037393   32269 certs.go:190] acquiring lock for shared ca certs: {Name:mkd09f0b55668af08c319f1908565cfe1a95e4c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0707 16:08:44.037555   32269 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.key
	I0707 16:08:44.037614   32269 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16845-29196/.minikube/proxy-client-ca.key
	I0707 16:08:44.037696   32269 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/client.key
	I0707 16:08:44.037764   32269 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/apiserver.key.07b57284
	I0707 16:08:44.037824   32269 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/proxy-client.key
	I0707 16:08:44.037833   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0707 16:08:44.037861   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0707 16:08:44.037887   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0707 16:08:44.037907   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0707 16:08:44.037926   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0707 16:08:44.037943   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0707 16:08:44.037960   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0707 16:08:44.037978   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0707 16:08:44.038072   32269 certs.go:437] found cert: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/29643.pem (1338 bytes)
	W0707 16:08:44.038118   32269 certs.go:433] ignoring /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/29643_empty.pem, impossibly tiny 0 bytes
	I0707 16:08:44.038129   32269 certs.go:437] found cert: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca-key.pem (1679 bytes)
	I0707 16:08:44.038164   32269 certs.go:437] found cert: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem (1082 bytes)
	I0707 16:08:44.038197   32269 certs.go:437] found cert: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/cert.pem (1123 bytes)
	I0707 16:08:44.038226   32269 certs.go:437] found cert: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/key.pem (1675 bytes)
	I0707 16:08:44.038290   32269 certs.go:437] found cert: /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem (1708 bytes)
	I0707 16:08:44.038319   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem -> /usr/share/ca-certificates/296432.pem
	I0707 16:08:44.038340   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0707 16:08:44.038357   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/29643.pem -> /usr/share/ca-certificates/29643.pem
	I0707 16:08:44.038740   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0707 16:08:44.054252   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0707 16:08:44.070311   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0707 16:08:44.085637   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0707 16:08:44.101541   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0707 16:08:44.116839   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0707 16:08:44.132462   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0707 16:08:44.147925   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0707 16:08:44.163266   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem --> /usr/share/ca-certificates/296432.pem (1708 bytes)
	I0707 16:08:44.178544   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0707 16:08:44.193815   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/29643.pem --> /usr/share/ca-certificates/29643.pem (1338 bytes)
	I0707 16:08:44.208883   32269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0707 16:08:44.220220   32269 ssh_runner.go:195] Run: openssl version
	I0707 16:08:44.223410   32269 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0707 16:08:44.223608   32269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296432.pem && ln -fs /usr/share/ca-certificates/296432.pem /etc/ssl/certs/296432.pem"
	I0707 16:08:44.230755   32269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296432.pem
	I0707 16:08:44.233434   32269 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  7 22:50 /usr/share/ca-certificates/296432.pem
	I0707 16:08:44.233642   32269 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul  7 22:50 /usr/share/ca-certificates/296432.pem
	I0707 16:08:44.233677   32269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296432.pem
	I0707 16:08:44.236885   32269 command_runner.go:130] > 3ec20f2e
	I0707 16:08:44.237079   32269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/296432.pem /etc/ssl/certs/3ec20f2e.0"
	I0707 16:08:44.244265   32269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0707 16:08:44.251283   32269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0707 16:08:44.253919   32269 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  7 22:44 /usr/share/ca-certificates/minikubeCA.pem
	I0707 16:08:44.254063   32269 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul  7 22:44 /usr/share/ca-certificates/minikubeCA.pem
	I0707 16:08:44.254096   32269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0707 16:08:44.257333   32269 command_runner.go:130] > b5213941
	I0707 16:08:44.257538   32269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0707 16:08:44.264486   32269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/29643.pem && ln -fs /usr/share/ca-certificates/29643.pem /etc/ssl/certs/29643.pem"
	I0707 16:08:44.271548   32269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29643.pem
	I0707 16:08:44.274210   32269 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  7 22:50 /usr/share/ca-certificates/29643.pem
	I0707 16:08:44.274393   32269 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul  7 22:50 /usr/share/ca-certificates/29643.pem
	I0707 16:08:44.274424   32269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29643.pem
	I0707 16:08:44.277706   32269 command_runner.go:130] > 51391683
	I0707 16:08:44.277945   32269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/29643.pem /etc/ssl/certs/51391683.0"
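
Each of the three certificate blocks above follows the same recipe: stage the PEM under /usr/share/ca-certificates, ask openssl for its subject hash, then point /etc/ssl/certs/<hash>.0 at the certificate so OpenSSL's hash-based CA lookup can find it. A sketch of the hash-and-link step (paths are illustrative and root privileges are assumed):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const cert = "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout` prints the 8-hex-digit subject hash,
	// e.g. b5213941 for the minikube CA above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	// ln -fs semantics: drop any stale link, then create the new one.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
}
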
	I0707 16:08:44.285047   32269 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0707 16:08:44.287633   32269 command_runner.go:130] > ca.crt
	I0707 16:08:44.287644   32269 command_runner.go:130] > ca.key
	I0707 16:08:44.287654   32269 command_runner.go:130] > healthcheck-client.crt
	I0707 16:08:44.287659   32269 command_runner.go:130] > healthcheck-client.key
	I0707 16:08:44.287663   32269 command_runner.go:130] > peer.crt
	I0707 16:08:44.287667   32269 command_runner.go:130] > peer.key
	I0707 16:08:44.287670   32269 command_runner.go:130] > server.crt
	I0707 16:08:44.287673   32269 command_runner.go:130] > server.key
	I0707 16:08:44.287842   32269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0707 16:08:44.291146   32269 command_runner.go:130] > Certificate will not expire
	I0707 16:08:44.291335   32269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0707 16:08:44.294719   32269 command_runner.go:130] > Certificate will not expire
	I0707 16:08:44.294920   32269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0707 16:08:44.298233   32269 command_runner.go:130] > Certificate will not expire
	I0707 16:08:44.298420   32269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0707 16:08:44.301688   32269 command_runner.go:130] > Certificate will not expire
	I0707 16:08:44.301884   32269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0707 16:08:44.305211   32269 command_runner.go:130] > Certificate will not expire
	I0707 16:08:44.305423   32269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0707 16:08:44.308701   32269 command_runner.go:130] > Certificate will not expire
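
The six probes above are `openssl x509 -checkend 86400`, i.e. "does this certificate expire within 24 hours?". An equivalent check in Go using crypto/x509 (certificate path taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
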
	I0707 16:08:44.308893   32269 kubeadm.go:404] StartCluster: {Name:multinode-136000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.55 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.56 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0707 16:08:44.308997   32269 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0707 16:08:44.322413   32269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0707 16:08:44.329076   32269 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0707 16:08:44.329086   32269 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0707 16:08:44.329091   32269 command_runner.go:130] > /var/lib/minikube/etcd:
	I0707 16:08:44.329094   32269 command_runner.go:130] > member
	I0707 16:08:44.329128   32269 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0707 16:08:44.329146   32269 kubeadm.go:636] restartCluster start
	I0707 16:08:44.329187   32269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0707 16:08:44.335693   32269 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:44.335980   32269 kubeconfig.go:135] verify returned: extract IP: "multinode-136000" does not appear in /Users/jenkins/minikube-integration/16845-29196/kubeconfig
	I0707 16:08:44.336048   32269 kubeconfig.go:146] "multinode-136000" context is missing from /Users/jenkins/minikube-integration/16845-29196/kubeconfig - will repair!
	I0707 16:08:44.336209   32269 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16845-29196/kubeconfig: {Name:mkd0efbd118d508759ab2c0498693bc4c84ef656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0707 16:08:44.336801   32269 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/16845-29196/kubeconfig
	I0707 16:08:44.336976   32269 kapi.go:59] client config for multinode-136000: &rest.Config{Host:"https://192.168.64.55:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/client.key", CAFile:"/Users/jenkins/minikube-integration/16845-29196/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2586920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0707 16:08:44.337453   32269 cert_rotation.go:137] Starting client certificate rotation controller
	I0707 16:08:44.337614   32269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0707 16:08:44.343808   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:44.343845   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:44.352209   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:44.854326   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:44.854488   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:44.865603   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:45.354267   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:45.354405   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:45.365901   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:45.854398   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:45.854547   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:45.865503   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:46.354376   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:46.354582   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:46.366017   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:46.854381   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:46.854538   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:46.866310   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:47.354410   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:47.354563   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:47.366380   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:47.854421   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:47.854575   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:47.865241   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:48.353084   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:48.353267   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:48.364610   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:48.853374   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:48.853531   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:48.864489   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:49.354426   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:49.354633   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:49.366393   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:49.853192   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:49.853302   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:49.862923   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:50.354479   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:50.354633   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:50.365102   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:50.854598   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:50.854769   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:50.864504   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:51.354496   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:51.354652   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:51.366012   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:51.854498   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:51.854657   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:51.866081   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:52.354502   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:52.354700   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:52.365767   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:52.854549   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:52.854721   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:52.865216   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:53.353325   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:53.353433   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:53.364628   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:53.854579   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:53.854708   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:53.865742   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
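
The block above is a fixed-interval retry against an overall deadline: every half second or so the runner asks pgrep for an apiserver pid, and when the deadline lapses first, the caller reports `context deadline exceeded` and falls through to reconfiguration, as the next line shows. A condensed sketch of the pattern (interval and timeout values are illustrative):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID retries the pgrep probe until it succeeds or ctx expires.
func waitForAPIServerPID(ctx context.Context) (string, error) {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		if out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output(); err == nil {
			return string(out), nil
		}
		select {
		case <-ctx.Done():
			return "", ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	if err != nil {
		fmt.Println("apiserver error:", err) // e.g. context deadline exceeded
		return
	}
	fmt.Println("apiserver pid:", pid)
}
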
	I0707 16:08:54.345206   32269 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0707 16:08:54.345236   32269 kubeadm.go:1128] stopping kube-system containers ...
	I0707 16:08:54.345351   32269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0707 16:08:54.364286   32269 command_runner.go:130] > a518f066f2a8
	I0707 16:08:54.364297   32269 command_runner.go:130] > 5446c9eb3ec8
	I0707 16:08:54.364300   32269 command_runner.go:130] > 3b27f9dc5b00
	I0707 16:08:54.364303   32269 command_runner.go:130] > b1b16ce0e1c2
	I0707 16:08:54.364307   32269 command_runner.go:130] > 55a8f58d8c0e
	I0707 16:08:54.364310   32269 command_runner.go:130] > df2ce2928fd1
	I0707 16:08:54.364314   32269 command_runner.go:130] > 76e1078f7728
	I0707 16:08:54.364318   32269 command_runner.go:130] > 116c42927310
	I0707 16:08:54.364324   32269 command_runner.go:130] > 2f325ef45b4f
	I0707 16:08:54.364331   32269 command_runner.go:130] > de3cae1acc39
	I0707 16:08:54.364337   32269 command_runner.go:130] > b2c1151ec663
	I0707 16:08:54.364343   32269 command_runner.go:130] > 50f3c898eb77
	I0707 16:08:54.364348   32269 command_runner.go:130] > 317ce02a7796
	I0707 16:08:54.364352   32269 command_runner.go:130] > 1cd6ba509687
	I0707 16:08:54.364355   32269 command_runner.go:130] > 9278b14b49d4
	I0707 16:08:54.364359   32269 command_runner.go:130] > d462026e5304
	I0707 16:08:54.364362   32269 command_runner.go:130] > ef7a96b917fd
	I0707 16:08:54.364366   32269 command_runner.go:130] > bcff6ac1bb02
	I0707 16:08:54.364369   32269 command_runner.go:130] > 93d5297f53d3
	I0707 16:08:54.364375   32269 command_runner.go:130] > 1e81fc329386
	I0707 16:08:54.364378   32269 command_runner.go:130] > cd3e620f0d40
	I0707 16:08:54.364382   32269 command_runner.go:130] > bb551cae3442
	I0707 16:08:54.364385   32269 command_runner.go:130] > 550e6ada05cb
	I0707 16:08:54.364388   32269 command_runner.go:130] > deb47344a0c7
	I0707 16:08:54.364392   32269 command_runner.go:130] > e209537350e5
	I0707 16:08:54.364395   32269 command_runner.go:130] > 69a988d9753c
	I0707 16:08:54.364398   32269 command_runner.go:130] > d9bf8dafc1ef
	I0707 16:08:54.364401   32269 command_runner.go:130] > d7bfdc2352e7
	I0707 16:08:54.364405   32269 command_runner.go:130] > 1b78fb311f21
	I0707 16:08:54.364408   32269 command_runner.go:130] > 6725ed88dcdf
	I0707 16:08:54.364412   32269 command_runner.go:130] > bbb8888a48de
	I0707 16:08:54.364424   32269 docker.go:462] Stopping containers: [a518f066f2a8 5446c9eb3ec8 3b27f9dc5b00 b1b16ce0e1c2 55a8f58d8c0e df2ce2928fd1 76e1078f7728 116c42927310 2f325ef45b4f de3cae1acc39 b2c1151ec663 50f3c898eb77 317ce02a7796 1cd6ba509687 9278b14b49d4 d462026e5304 ef7a96b917fd bcff6ac1bb02 93d5297f53d3 1e81fc329386 cd3e620f0d40 bb551cae3442 550e6ada05cb deb47344a0c7 e209537350e5 69a988d9753c d9bf8dafc1ef d7bfdc2352e7 1b78fb311f21 6725ed88dcdf bbb8888a48de]
	I0707 16:08:54.364495   32269 ssh_runner.go:195] Run: docker stop a518f066f2a8 5446c9eb3ec8 3b27f9dc5b00 b1b16ce0e1c2 55a8f58d8c0e df2ce2928fd1 76e1078f7728 116c42927310 2f325ef45b4f de3cae1acc39 b2c1151ec663 50f3c898eb77 317ce02a7796 1cd6ba509687 9278b14b49d4 d462026e5304 ef7a96b917fd bcff6ac1bb02 93d5297f53d3 1e81fc329386 cd3e620f0d40 bb551cae3442 550e6ada05cb deb47344a0c7 e209537350e5 69a988d9753c d9bf8dafc1ef d7bfdc2352e7 1b78fb311f21 6725ed88dcdf bbb8888a48de
	I0707 16:08:54.379408   32269 command_runner.go:130] > a518f066f2a8
	I0707 16:08:54.379459   32269 command_runner.go:130] > 5446c9eb3ec8
	I0707 16:08:54.379464   32269 command_runner.go:130] > 3b27f9dc5b00
	I0707 16:08:54.379472   32269 command_runner.go:130] > b1b16ce0e1c2
	I0707 16:08:54.379476   32269 command_runner.go:130] > 55a8f58d8c0e
	I0707 16:08:54.379480   32269 command_runner.go:130] > df2ce2928fd1
	I0707 16:08:54.379641   32269 command_runner.go:130] > 76e1078f7728
	I0707 16:08:54.379648   32269 command_runner.go:130] > 116c42927310
	I0707 16:08:54.379652   32269 command_runner.go:130] > 2f325ef45b4f
	I0707 16:08:54.379656   32269 command_runner.go:130] > de3cae1acc39
	I0707 16:08:54.379659   32269 command_runner.go:130] > b2c1151ec663
	I0707 16:08:54.379663   32269 command_runner.go:130] > 50f3c898eb77
	I0707 16:08:54.379666   32269 command_runner.go:130] > 317ce02a7796
	I0707 16:08:54.379670   32269 command_runner.go:130] > 1cd6ba509687
	I0707 16:08:54.379673   32269 command_runner.go:130] > 9278b14b49d4
	I0707 16:08:54.379676   32269 command_runner.go:130] > d462026e5304
	I0707 16:08:54.379681   32269 command_runner.go:130] > ef7a96b917fd
	I0707 16:08:54.379684   32269 command_runner.go:130] > bcff6ac1bb02
	I0707 16:08:54.379688   32269 command_runner.go:130] > 93d5297f53d3
	I0707 16:08:54.379694   32269 command_runner.go:130] > 1e81fc329386
	I0707 16:08:54.379698   32269 command_runner.go:130] > cd3e620f0d40
	I0707 16:08:54.379707   32269 command_runner.go:130] > bb551cae3442
	I0707 16:08:54.379712   32269 command_runner.go:130] > 550e6ada05cb
	I0707 16:08:54.379716   32269 command_runner.go:130] > deb47344a0c7
	I0707 16:08:54.379719   32269 command_runner.go:130] > e209537350e5
	I0707 16:08:54.379723   32269 command_runner.go:130] > 69a988d9753c
	I0707 16:08:54.379726   32269 command_runner.go:130] > d9bf8dafc1ef
	I0707 16:08:54.379730   32269 command_runner.go:130] > d7bfdc2352e7
	I0707 16:08:54.379733   32269 command_runner.go:130] > 1b78fb311f21
	I0707 16:08:54.379737   32269 command_runner.go:130] > 6725ed88dcdf
	I0707 16:08:54.379740   32269 command_runner.go:130] > bbb8888a48de
	I0707 16:08:54.380488   32269 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0707 16:08:54.393263   32269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0707 16:08:54.400009   32269 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0707 16:08:54.400019   32269 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0707 16:08:54.400024   32269 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0707 16:08:54.400031   32269 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0707 16:08:54.400140   32269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0707 16:08:54.400180   32269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0707 16:08:54.406745   32269 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0707 16:08:54.406762   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0707 16:08:54.474728   32269 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0707 16:08:54.474973   32269 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0707 16:08:54.475328   32269 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0707 16:08:54.475614   32269 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0707 16:08:54.475996   32269 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0707 16:08:54.476373   32269 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0707 16:08:54.476824   32269 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0707 16:08:54.477172   32269 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0707 16:08:54.477532   32269 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0707 16:08:54.477809   32269 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0707 16:08:54.478188   32269 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0707 16:08:54.479247   32269 command_runner.go:130] > [certs] Using the existing "sa" key
	I0707 16:08:54.479283   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0707 16:08:54.520074   32269 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0707 16:08:54.590387   32269 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0707 16:08:54.827680   32269 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0707 16:08:54.893985   32269 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0707 16:08:55.087888   32269 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0707 16:08:55.089870   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0707 16:08:55.140213   32269 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0707 16:08:55.141077   32269 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0707 16:08:55.141281   32269 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0707 16:08:55.246438   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0707 16:08:55.294366   32269 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0707 16:08:55.294380   32269 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0707 16:08:55.297942   32269 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0707 16:08:55.298676   32269 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0707 16:08:55.300049   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0707 16:08:55.339145   32269 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
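
The restart path replays five kubeadm init phases in sequence (certs, kubeconfig, kubelet-start, control-plane, etcd), all against the freshly copied kubeadm.yaml. The same sequence as a sketch, with the PATH handling and error reporting of the logged commands simplified:

package main

import (
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.27.3/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}
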
	I0707 16:08:55.349017   32269 api_server.go:52] waiting for apiserver process to appear ...
	I0707 16:08:55.349095   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0707 16:08:55.858207   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0707 16:08:56.358024   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0707 16:08:56.857866   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0707 16:08:57.358507   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0707 16:08:57.857883   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0707 16:08:57.867774   32269 command_runner.go:130] > 1700
	I0707 16:08:57.867991   32269 api_server.go:72] duration metric: took 2.518920572s to wait for apiserver process to appear ...
	I0707 16:08:57.868002   32269 api_server.go:88] waiting for apiserver healthz status ...
	I0707 16:08:57.868015   32269 api_server.go:253] Checking apiserver healthz at https://192.168.64.55:8443/healthz ...
	I0707 16:09:00.630569   32269 api_server.go:279] https://192.168.64.55:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0707 16:09:00.630595   32269 api_server.go:103] status: https://192.168.64.55:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0707 16:09:01.132143   32269 api_server.go:253] Checking apiserver healthz at https://192.168.64.55:8443/healthz ...
	I0707 16:09:01.137719   32269 api_server.go:279] https://192.168.64.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0707 16:09:01.137735   32269 api_server.go:103] status: https://192.168.64.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0707 16:09:01.631260   32269 api_server.go:253] Checking apiserver healthz at https://192.168.64.55:8443/healthz ...
	I0707 16:09:01.638340   32269 api_server.go:279] https://192.168.64.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0707 16:09:01.638357   32269 api_server.go:103] status: https://192.168.64.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0707 16:09:02.130776   32269 api_server.go:253] Checking apiserver healthz at https://192.168.64.55:8443/healthz ...
	I0707 16:09:02.134171   32269 api_server.go:279] https://192.168.64.55:8443/healthz returned 200:
	ok
	I0707 16:09:02.134227   32269 round_trippers.go:463] GET https://192.168.64.55:8443/version
	I0707 16:09:02.134232   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:02.134247   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:02.134253   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:02.140004   32269 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0707 16:09:02.140017   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:02.140023   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:02.140028   32269 round_trippers.go:580]     Content-Length: 263
	I0707 16:09:02.140033   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:02 GMT
	I0707 16:09:02.140037   32269 round_trippers.go:580]     Audit-Id: 247db14d-bb03-46ad-ba2e-78fb14351827
	I0707 16:09:02.140042   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:02.140046   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:02.140052   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:02.140068   32269 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0707 16:09:02.140114   32269 api_server.go:141] control plane version: v1.27.3
	I0707 16:09:02.140123   32269 api_server.go:131] duration metric: took 4.272022782s to wait for apiserver health ...
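The block above is a poll-until-200 loop against the apiserver's /healthz endpoint: each 500 response carries the poststarthook checklist, and the wait ends on the first 200. As a rough illustration only (not minikube's actual api_server.go), a minimal Go sketch follows; the endpoint URL and the ~500ms retry cadence are taken from this log, while the client setup, timeout, and function name are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
// mirroring the retry loop in the log above. Name and timeout are illustrative.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The real code authenticates with cluster certificates; verification
		// is skipped here only to keep the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is simply "ok"
			}
			// A 500 carries the poststarthook checklist seen above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // cadence roughly matches the log timestamps
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.64.55:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}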
	I0707 16:09:02.140129   32269 cni.go:84] Creating CNI manager for ""
	I0707 16:09:02.140135   32269 cni.go:137] 2 nodes found, recommending kindnet
	I0707 16:09:02.178134   32269 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0707 16:09:02.215039   32269 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0707 16:09:02.220489   32269 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0707 16:09:02.220502   32269 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0707 16:09:02.220507   32269 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0707 16:09:02.220513   32269 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0707 16:09:02.220517   32269 command_runner.go:130] > Access: 2023-07-07 23:08:37.211250078 +0000
	I0707 16:09:02.220522   32269 command_runner.go:130] > Modify: 2023-06-30 22:28:30.000000000 +0000
	I0707 16:09:02.220527   32269 command_runner.go:130] > Change: 2023-07-07 23:08:35.896250169 +0000
	I0707 16:09:02.220530   32269 command_runner.go:130] >  Birth: -
	I0707 16:09:02.220560   32269 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0707 16:09:02.220567   32269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0707 16:09:02.252321   32269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0707 16:09:03.177128   32269 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0707 16:09:03.179552   32269 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0707 16:09:03.181208   32269 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0707 16:09:03.189806   32269 command_runner.go:130] > daemonset.apps/kindnet configured
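The CNI step above amounts to copying the generated kindnet manifest into the guest and applying it with the version-pinned kubectl over SSH. A hedged sketch of the equivalent invocation follows; the binary, kubeconfig, and manifest paths come from the log, while the "docker@192.168.64.55" SSH target and the plain exec wrapper are illustrative stand-ins for minikube's internal ssh_runner.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths are from the log above; the SSH target is an assumption standing
	// in for minikube's ssh_runner, which manages keys and the session itself.
	cmd := exec.Command("ssh", "docker@192.168.64.55",
		"sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply"+
			" --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out) // e.g. "daemonset.apps/kindnet configured"
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}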
	I0707 16:09:03.211313   32269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0707 16:09:03.211409   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods
	I0707 16:09:03.211419   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.211434   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.211445   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.215768   32269 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0707 16:09:03.215782   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.215788   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.215793   32269 round_trippers.go:580]     Audit-Id: 5b1c3cf4-fcc3-40ca-8919-b9d3264790e0
	I0707 16:09:03.215799   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.215806   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.215813   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.215820   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.216660   32269 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1098"},"items":[{"metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84962 chars]
	I0707 16:09:03.219748   32269 system_pods.go:59] 12 kube-system pods found
	I0707 16:09:03.219763   32269 system_pods.go:61] "coredns-5d78c9869d-78qmb" [d9671f13-fa08-4161-b216-53f645b9a1c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0707 16:09:03.219769   32269 system_pods.go:61] "etcd-multinode-136000" [636b837f-c544-4688-aa2b-2f602c1546c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0707 16:09:03.219773   32269 system_pods.go:61] "kindnet-gj2vg" [596c8647-685e-449c-86c0-9aeb7dddb2f5] Running
	I0707 16:09:03.219778   32269 system_pods.go:61] "kindnet-h8rpq" [30c883b3-9941-48da-a543-d1649a5418f9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0707 16:09:03.219782   32269 system_pods.go:61] "kindnet-zpx7k" [179bc03c-a64f-48bc-9bb9-52e5c91e5037] Running
	I0707 16:09:03.219786   32269 system_pods.go:61] "kube-apiserver-multinode-136000" [e33f6220-5f99-43a2-adc8-49399f82e89c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0707 16:09:03.219792   32269 system_pods.go:61] "kube-controller-manager-multinode-136000" [a4c59edf-0147-4ae9-a3d0-b7559b3ab6c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0707 16:09:03.219796   32269 system_pods.go:61] "kube-proxy-5865g" [3b0f7832-d4d7-41e7-ab55-08284cf98427] Running
	I0707 16:09:03.219800   32269 system_pods.go:61] "kube-proxy-dvrg9" [f7473507-c702-444e-b727-71c8a8cc4c08] Running
	I0707 16:09:03.219808   32269 system_pods.go:61] "kube-proxy-wd4p8" [4979ea40-a983-4f80-b7ac-f6e05cd5f6b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0707 16:09:03.219815   32269 system_pods.go:61] "kube-scheduler-multinode-136000" [90cc3143-cca1-4ac0-9c0a-0bfce8a8d99e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0707 16:09:03.219821   32269 system_pods.go:61] "storage-provisioner" [e617383f-c16f-44a7-a1a4-a2813ecc84f2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0707 16:09:03.219826   32269 system_pods.go:74] duration metric: took 8.5022ms to wait for pod list to return data ...
	I0707 16:09:03.219834   32269 node_conditions.go:102] verifying NodePressure condition ...
	I0707 16:09:03.219871   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes
	I0707 16:09:03.219875   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.219881   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.219888   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.221711   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:03.221722   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.221728   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.221733   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.221739   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.221744   32269 round_trippers.go:580]     Audit-Id: 12f0975b-b999-4702-b63c-2ebacf21d7d1
	I0707 16:09:03.221748   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.221754   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.221847   32269 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1098"},"items":[{"metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 9599 chars]
	I0707 16:09:03.222287   32269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0707 16:09:03.222302   32269 node_conditions.go:123] node cpu capacity is 2
	I0707 16:09:03.222311   32269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0707 16:09:03.222315   32269 node_conditions.go:123] node cpu capacity is 2
	I0707 16:09:03.222322   32269 node_conditions.go:105] duration metric: took 2.481054ms to run NodePressure ...
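The NodePressure step above reads each node's capacity fields (ephemeral storage and CPU) from the NodeList. A minimal client-go sketch that surfaces the same numbers follows; the kubeconfig path is an assumption, and the printed fields mirror the log lines.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is illustrative; any config pointing at the cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Same fields the log reports: ephemeral storage and CPU capacity.
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: storage ephemeral capacity %s, cpu capacity %s\n",
			n.Name, eph.String(), n.Status.Capacity.Cpu().String())
	}
}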
	I0707 16:09:03.222332   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0707 16:09:03.321474   32269 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0707 16:09:03.356359   32269 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0707 16:09:03.357206   32269 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0707 16:09:03.357263   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0707 16:09:03.357269   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.357275   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.357280   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.359604   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:03.359616   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.359625   32269 round_trippers.go:580]     Audit-Id: d7c77248-1633-421c-bf05-8688256fbcc6
	I0707 16:09:03.359632   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.359639   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.359660   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.359671   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.359679   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.359926   32269 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1100"},"items":[{"metadata":{"name":"etcd-multinode-136000","namespace":"kube-system","uid":"636b837f-c544-4688-aa2b-2f602c1546c6","resourceVersion":"1090","creationTimestamp":"2023-07-07T23:02:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.55:2379","kubernetes.io/config.hash":"8669674c45323598ebbb888fff5e6cb4","kubernetes.io/config.mirror":"8669674c45323598ebbb888fff5e6cb4","kubernetes.io/config.seen":"2023-07-07T23:02:20.447968150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29768 chars]
	I0707 16:09:03.360649   32269 kubeadm.go:787] kubelet initialised
	I0707 16:09:03.360658   32269 kubeadm.go:788] duration metric: took 3.442943ms waiting for restarted kubelet to initialise ...
	I0707 16:09:03.360665   32269 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0707 16:09:03.360703   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods
	I0707 16:09:03.360708   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.360714   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.360721   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.363439   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:03.363448   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.363456   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.363480   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.363492   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.363499   32269 round_trippers.go:580]     Audit-Id: 6c3873fb-46ad-4235-8c4a-9de668256e71
	I0707 16:09:03.363505   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.363510   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.364961   32269 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1100"},"items":[{"metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84962 chars]
	I0707 16:09:03.367273   32269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:03.367312   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:03.367317   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.367323   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.367329   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.369039   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:03.369052   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.369063   32269 round_trippers.go:580]     Audit-Id: 9f12d6f5-8206-4159-9f38-0abe8bdf661d
	I0707 16:09:03.369073   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.369082   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.369090   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.369099   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.369108   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.369322   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:03.369565   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:03.369571   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.369577   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.369584   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.371132   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:03.371141   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.371147   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.371152   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.371158   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.371162   32269 round_trippers.go:580]     Audit-Id: 447f2f44-5cc7-4191-8925-a6d8bb1e999f
	I0707 16:09:03.371168   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.371172   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.371321   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:03.371513   32269 pod_ready.go:97] node "multinode-136000" hosting pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:03.371522   32269 pod_ready.go:81] duration metric: took 4.238569ms waiting for pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace to be "Ready" ...
	E0707 16:09:03.371527   32269 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-136000" hosting pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
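The skip above hinges on the hosting node's Ready condition: while multinode-136000 reports Ready=False, every pod scheduled on it is treated as "not yet Ready" and re-checked rather than failed. A small helper illustrating that rule follows; the function name is invented, but the condition check is standard Kubernetes API usage.

package ready

import corev1 "k8s.io/api/core/v1"

// nodeReady reports whether a node's Ready condition is True. Pods hosted on
// a NotReady node are skipped by the wait loop above instead of failing it.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}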
	I0707 16:09:03.371533   32269 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:03.371557   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-136000
	I0707 16:09:03.371561   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.371567   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.371573   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.372937   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:03.372944   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.372950   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.372954   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.372959   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.372965   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.372969   32269 round_trippers.go:580]     Audit-Id: b94d4be0-cd77-491f-8b2d-3a797f785a3a
	I0707 16:09:03.372975   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.373258   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-136000","namespace":"kube-system","uid":"636b837f-c544-4688-aa2b-2f602c1546c6","resourceVersion":"1090","creationTimestamp":"2023-07-07T23:02:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.55:2379","kubernetes.io/config.hash":"8669674c45323598ebbb888fff5e6cb4","kubernetes.io/config.mirror":"8669674c45323598ebbb888fff5e6cb4","kubernetes.io/config.seen":"2023-07-07T23:02:20.447968150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6296 chars]
	I0707 16:09:03.373463   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:03.373470   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.373476   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.373482   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.374725   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:03.374732   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.374737   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.374742   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.374746   32269 round_trippers.go:580]     Audit-Id: 0a372861-8401-416a-b28a-2693b4146ff6
	I0707 16:09:03.374751   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.374756   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.374761   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.374988   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:03.375155   32269 pod_ready.go:97] node "multinode-136000" hosting pod "etcd-multinode-136000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:03.375163   32269 pod_ready.go:81] duration metric: took 3.626176ms waiting for pod "etcd-multinode-136000" in "kube-system" namespace to be "Ready" ...
	E0707 16:09:03.375168   32269 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-136000" hosting pod "etcd-multinode-136000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:03.375177   32269 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:03.375204   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-136000
	I0707 16:09:03.375209   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.375214   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.375220   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.376982   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:03.376994   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.377002   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.377009   32269 round_trippers.go:580]     Audit-Id: 06924715-62a6-441f-9141-88242ec7a0bb
	I0707 16:09:03.377018   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.377023   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.377029   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.377034   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.377299   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-136000","namespace":"kube-system","uid":"e33f6220-5f99-43a2-adc8-49399f82e89c","resourceVersion":"1088","creationTimestamp":"2023-07-07T23:02:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.55:8443","kubernetes.io/config.hash":"10d234a603360886d3e49d7f2ebd7116","kubernetes.io/config.mirror":"10d234a603360886d3e49d7f2ebd7116","kubernetes.io/config.seen":"2023-07-07T23:02:20.447888975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7853 chars]
	I0707 16:09:03.377521   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:03.377527   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.377533   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.377539   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.379566   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:03.379577   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.379586   32269 round_trippers.go:580]     Audit-Id: e90f5122-858c-485b-bf73-6eebac21bf2d
	I0707 16:09:03.379594   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.379600   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.379605   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.379610   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.379615   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.379995   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:03.380194   32269 pod_ready.go:97] node "multinode-136000" hosting pod "kube-apiserver-multinode-136000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:03.380203   32269 pod_ready.go:81] duration metric: took 5.020046ms waiting for pod "kube-apiserver-multinode-136000" in "kube-system" namespace to be "Ready" ...
	E0707 16:09:03.380208   32269 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-136000" hosting pod "kube-apiserver-multinode-136000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:03.380217   32269 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:03.411767   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-136000
	I0707 16:09:03.411794   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.411841   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.411853   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.415852   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:03.415867   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.415875   32269 round_trippers.go:580]     Audit-Id: 20628caf-91cc-4b0e-adf6-565b561b20d0
	I0707 16:09:03.415882   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.415889   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.415896   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.415903   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.415911   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.416184   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-136000","namespace":"kube-system","uid":"a4c59edf-0147-4ae9-a3d0-b7559b3ab6c9","resourceVersion":"1091","creationTimestamp":"2023-07-07T23:02:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8b16ffd443c4ff5953586fb0655a6320","kubernetes.io/config.mirror":"8b16ffd443c4ff5953586fb0655a6320","kubernetes.io/config.seen":"2023-07-07T23:02:28.360407979Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I0707 16:09:03.611694   32269 request.go:628] Waited for 195.204208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:03.611727   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:03.611748   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.611759   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.611766   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.613566   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:03.613578   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.613584   32269 round_trippers.go:580]     Audit-Id: 24f0011a-6858-4477-a3f6-ecdc3ced2a11
	I0707 16:09:03.613589   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.613618   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.613627   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.613633   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.613638   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.613794   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:03.613991   32269 pod_ready.go:97] node "multinode-136000" hosting pod "kube-controller-manager-multinode-136000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:03.614002   32269 pod_ready.go:81] duration metric: took 233.773711ms waiting for pod "kube-controller-manager-multinode-136000" in "kube-system" namespace to be "Ready" ...
	E0707 16:09:03.614014   32269 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-136000" hosting pod "kube-controller-manager-multinode-136000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
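The "Waited ... due to client-side throttling" lines above come from client-go's local token-bucket rate limiter (defaults: QPS 5, burst 10), not from the server's priority-and-fairness layer, as the message itself notes. If the polling cadence mattered, the limiter can be raised on the rest.Config before building the clientset; a minimal sketch follows, with an assumed kubeconfig path and illustrative values.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // client-go default is 5 requests/second
	cfg.Burst = 100 // client-go default burst is 10
	_ = kubernetes.NewForConfigOrDie(cfg)
}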
	I0707 16:09:03.614020   32269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5865g" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:03.812491   32269 request.go:628] Waited for 198.402739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5865g
	I0707 16:09:03.812620   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5865g
	I0707 16:09:03.812632   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.812645   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.812656   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.815972   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:03.815991   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.816005   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.816013   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.816019   32269 round_trippers.go:580]     Audit-Id: 2fa3074a-76a3-4b84-bdc5-caa672c704e1
	I0707 16:09:03.816026   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.816033   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.816040   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.816186   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5865g","generateName":"kube-proxy-","namespace":"kube-system","uid":"3b0f7832-d4d7-41e7-ab55-08284cf98427","resourceVersion":"1059","creationTimestamp":"2023-07-07T23:04:00Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"40ec798e-383e-4e94-b5d5-10fc13347c1a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"40ec798e-383e-4e94-b5d5-10fc13347c1a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0707 16:09:04.011805   32269 request.go:628] Waited for 195.171757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000-m03
	I0707 16:09:04.011859   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000-m03
	I0707 16:09:04.011869   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:04.011881   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:04.011894   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:04.014913   32269 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0707 16:09:04.014930   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:04.014939   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:04.014945   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:04.014952   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:04.014959   32269 round_trippers.go:580]     Content-Length: 210
	I0707 16:09:04.014967   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:04 GMT
	I0707 16:09:04.014974   32269 round_trippers.go:580]     Audit-Id: 01c31788-f61b-4aaa-8367-9f1c7a777ca9
	I0707 16:09:04.014981   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:04.015007   32269 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-136000-m03\" not found","reason":"NotFound","details":{"name":"multinode-136000-m03","kind":"nodes"},"code":404}
	I0707 16:09:04.015163   32269 pod_ready.go:97] node "multinode-136000-m03" hosting pod "kube-proxy-5865g" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-136000-m03": nodes "multinode-136000-m03" not found
	I0707 16:09:04.015175   32269 pod_ready.go:81] duration metric: took 401.139454ms waiting for pod "kube-proxy-5865g" in "kube-system" namespace to be "Ready" ...
	E0707 16:09:04.015182   32269 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-136000-m03" hosting pod "kube-proxy-5865g" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-136000-m03": nodes "multinode-136000-m03" not found
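The 404 above is the interesting branch: kube-proxy-5865g still records multinode-136000-m03 as its node, but that node has been removed, so the node GET returns NotFound and the wait loop treats the error as "hosting node not Ready" rather than aborting. A hedged sketch of that branch using apimachinery's error helpers follows; the kubeconfig path is assumed, and the node name comes from the log.

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// The node name comes from the log; it was deleted from the cluster.
	_, err = cs.CoreV1().Nodes().Get(context.TODO(), "multinode-136000-m03", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// Treated as "hosting node not Ready": the pod is skipped and the
		// wait continues, as pod_ready.go logs above, instead of failing.
		fmt.Println("node gone; pod skipped")
	}
}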
	I0707 16:09:04.015191   32269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dvrg9" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:04.212662   32269 request.go:628] Waited for 197.411549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvrg9
	I0707 16:09:04.212760   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvrg9
	I0707 16:09:04.212770   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:04.212783   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:04.212794   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:04.216073   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:04.216089   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:04.216097   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:04.216103   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:04.216139   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:04 GMT
	I0707 16:09:04.216153   32269 round_trippers.go:580]     Audit-Id: 341f654a-d4b7-4f27-9fcf-0190bfd343bd
	I0707 16:09:04.216161   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:04.216168   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:04.216275   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dvrg9","generateName":"kube-proxy-","namespace":"kube-system","uid":"f7473507-c702-444e-b727-71c8a8cc4c08","resourceVersion":"936","creationTimestamp":"2023-07-07T23:03:17Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"40ec798e-383e-4e94-b5d5-10fc13347c1a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:03:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"40ec798e-383e-4e94-b5d5-10fc13347c1a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0707 16:09:04.411776   32269 request.go:628] Waited for 195.153203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000-m02
	I0707 16:09:04.411828   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000-m02
	I0707 16:09:04.411839   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:04.411889   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:04.411902   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:04.414863   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:04.414880   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:04.414888   32269 round_trippers.go:580]     Audit-Id: 85362d09-ed4f-4bdf-b984-2ec35681340c
	I0707 16:09:04.414895   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:04.414901   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:04.414909   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:04.414916   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:04.414930   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:04 GMT
	I0707 16:09:04.415022   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000-m02","uid":"e53ac27c-579d-4edc-87f1-2f80a931d265","resourceVersion":"955","creationTimestamp":"2023-07-07T23:06:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:06:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3252 chars]
	I0707 16:09:04.415226   32269 pod_ready.go:92] pod "kube-proxy-dvrg9" in "kube-system" namespace has status "Ready":"True"
	I0707 16:09:04.415236   32269 pod_ready.go:81] duration metric: took 400.030172ms waiting for pod "kube-proxy-dvrg9" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:04.415244   32269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wd4p8" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:04.613530   32269 request.go:628] Waited for 198.23456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wd4p8
	I0707 16:09:04.613584   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wd4p8
	I0707 16:09:04.613593   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:04.613606   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:04.613618   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:04.616532   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:04.616550   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:04.616558   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:04 GMT
	I0707 16:09:04.616565   32269 round_trippers.go:580]     Audit-Id: cba2ae13-a23d-49e5-a805-a26ff13b5413
	I0707 16:09:04.616581   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:04.616589   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:04.616597   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:04.616604   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:04.616712   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wd4p8","generateName":"kube-proxy-","namespace":"kube-system","uid":"4979ea40-a983-4f80-b7ac-f6e05cd5f6b4","resourceVersion":"1096","creationTimestamp":"2023-07-07T23:02:40Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"40ec798e-383e-4e94-b5d5-10fc13347c1a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"40ec798e-383e-4e94-b5d5-10fc13347c1a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5929 chars]
	I0707 16:09:04.812467   32269 request.go:628] Waited for 195.386339ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:04.812498   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:04.812503   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:04.812510   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:04.812515   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:04.814100   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:04.814116   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:04.814125   32269 round_trippers.go:580]     Audit-Id: 5556d0d6-ff62-4a1b-8f7c-0db11c482f7f
	I0707 16:09:04.814131   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:04.814136   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:04.814142   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:04.814146   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:04.814152   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:04 GMT
	I0707 16:09:04.814292   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:04.814486   32269 pod_ready.go:97] node "multinode-136000" hosting pod "kube-proxy-wd4p8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:04.814496   32269 pod_ready.go:81] duration metric: took 399.23728ms waiting for pod "kube-proxy-wd4p8" in "kube-system" namespace to be "Ready" ...
	E0707 16:09:04.814501   32269 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-136000" hosting pod "kube-proxy-wd4p8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:04.814508   32269 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:05.011652   32269 request.go:628] Waited for 197.082234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-136000
	I0707 16:09:05.011684   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-136000
	I0707 16:09:05.011695   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:05.011703   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:05.011710   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:05.013586   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:05.013597   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:05.013602   32269 round_trippers.go:580]     Audit-Id: 75fb5ab7-fd6f-43ca-96e8-588b38779806
	I0707 16:09:05.013608   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:05.013613   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:05.013618   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:05.013623   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:05.013628   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:05 GMT
	I0707 16:09:05.013703   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-136000","namespace":"kube-system","uid":"90cc3143-cca1-4ac0-9c0a-0bfce8a8d99e","resourceVersion":"1089","creationTimestamp":"2023-07-07T23:02:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"81a87492d868eacbd03c1d020dad533c","kubernetes.io/config.mirror":"81a87492d868eacbd03c1d020dad533c","kubernetes.io/config.seen":"2023-07-07T23:02:28.360408566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0707 16:09:05.213129   32269 request.go:628] Waited for 199.169502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:05.213240   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:05.213284   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:05.213298   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:05.213310   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:05.215952   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:05.215967   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:05.215975   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:05.215982   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:05.215989   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:05 GMT
	I0707 16:09:05.215997   32269 round_trippers.go:580]     Audit-Id: a3afa5d0-2161-4b84-923e-681ab6734cd0
	I0707 16:09:05.216003   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:05.216011   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:05.216082   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:05.216335   32269 pod_ready.go:97] node "multinode-136000" hosting pod "kube-scheduler-multinode-136000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:05.216350   32269 pod_ready.go:81] duration metric: took 401.828376ms waiting for pod "kube-scheduler-multinode-136000" in "kube-system" namespace to be "Ready" ...
	E0707 16:09:05.216358   32269 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-136000" hosting pod "kube-scheduler-multinode-136000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:05.216366   32269 pod_ready.go:38] duration metric: took 1.855652353s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0707 16:09:05.216378   32269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0707 16:09:05.224207   32269 command_runner.go:130] > -16
	I0707 16:09:05.224319   32269 ops.go:34] apiserver oom_adj: -16
	I0707 16:09:05.224327   32269 kubeadm.go:640] restartCluster took 20.894718755s
	I0707 16:09:05.224332   32269 kubeadm.go:406] StartCluster complete in 20.914984837s
	I0707 16:09:05.224340   32269 settings.go:142] acquiring lock: {Name:mk51b97c743cd3c6fc8ca8d160602ac40ac51808 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0707 16:09:05.224427   32269 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16845-29196/kubeconfig
	I0707 16:09:05.224810   32269 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16845-29196/kubeconfig: {Name:mkd0efbd118d508759ab2c0498693bc4c84ef656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0707 16:09:05.225043   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0707 16:09:05.225074   32269 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0707 16:09:05.225235   32269 config.go:182] Loaded profile config "multinode-136000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0707 16:09:05.225436   32269 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/16845-29196/kubeconfig
	I0707 16:09:05.268934   32269 out.go:177] * Enabled addons: 
	I0707 16:09:05.290207   32269 addons.go:499] enable addons completed in 65.134152ms: enabled=[]
	I0707 16:09:05.269173   32269 kapi.go:59] client config for multinode-136000: &rest.Config{Host:"https://192.168.64.55:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/client.key", CAFile:"/Users/jenkins/minikube-integration/16845-29196/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2586920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0707 16:09:05.290490   32269 round_trippers.go:463] GET https://192.168.64.55:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0707 16:09:05.290497   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:05.290503   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:05.290509   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:05.292422   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:05.292434   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:05.292441   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:05.292446   32269 round_trippers.go:580]     Content-Length: 292
	I0707 16:09:05.292451   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:05 GMT
	I0707 16:09:05.292456   32269 round_trippers.go:580]     Audit-Id: c0d7b9d3-dfb7-453e-8970-d3260b363917
	I0707 16:09:05.292461   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:05.292466   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:05.292471   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:05.292485   32269 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5f7a01b1-7a53-49df-8161-430fd40f925b","resourceVersion":"1099","creationTimestamp":"2023-07-07T23:02:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0707 16:09:05.292591   32269 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-136000" context rescaled to 1 replicas
	I0707 16:09:05.292610   32269 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.64.55 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0707 16:09:05.304592   32269 command_runner.go:130] > apiVersion: v1
	I0707 16:09:05.314246   32269 command_runner.go:130] > data:
	I0707 16:09:05.314246   32269 out.go:177] * Verifying Kubernetes components...
	I0707 16:09:05.314256   32269 command_runner.go:130] >   Corefile: |
	I0707 16:09:05.314269   32269 command_runner.go:130] >     .:53 {
	I0707 16:09:05.335055   32269 command_runner.go:130] >         log
	I0707 16:09:05.335071   32269 command_runner.go:130] >         errors
	I0707 16:09:05.335077   32269 command_runner.go:130] >         health {
	I0707 16:09:05.335082   32269 command_runner.go:130] >            lameduck 5s
	I0707 16:09:05.335085   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0707 16:09:05.335086   32269 command_runner.go:130] >         }
	I0707 16:09:05.335095   32269 command_runner.go:130] >         ready
	I0707 16:09:05.335100   32269 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0707 16:09:05.335105   32269 command_runner.go:130] >            pods insecure
	I0707 16:09:05.335116   32269 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0707 16:09:05.335123   32269 command_runner.go:130] >            ttl 30
	I0707 16:09:05.335127   32269 command_runner.go:130] >         }
	I0707 16:09:05.335131   32269 command_runner.go:130] >         prometheus :9153
	I0707 16:09:05.335134   32269 command_runner.go:130] >         hosts {
	I0707 16:09:05.335138   32269 command_runner.go:130] >            192.168.64.1 host.minikube.internal
	I0707 16:09:05.335142   32269 command_runner.go:130] >            fallthrough
	I0707 16:09:05.335145   32269 command_runner.go:130] >         }
	I0707 16:09:05.335150   32269 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0707 16:09:05.335155   32269 command_runner.go:130] >            max_concurrent 1000
	I0707 16:09:05.335163   32269 command_runner.go:130] >         }
	I0707 16:09:05.335167   32269 command_runner.go:130] >         cache 30
	I0707 16:09:05.335172   32269 command_runner.go:130] >         loop
	I0707 16:09:05.335176   32269 command_runner.go:130] >         reload
	I0707 16:09:05.335180   32269 command_runner.go:130] >         loadbalance
	I0707 16:09:05.335184   32269 command_runner.go:130] >     }
	I0707 16:09:05.335187   32269 command_runner.go:130] > kind: ConfigMap
	I0707 16:09:05.335190   32269 command_runner.go:130] > metadata:
	I0707 16:09:05.335194   32269 command_runner.go:130] >   creationTimestamp: "2023-07-07T23:02:28Z"
	I0707 16:09:05.335198   32269 command_runner.go:130] >   name: coredns
	I0707 16:09:05.335201   32269 command_runner.go:130] >   namespace: kube-system
	I0707 16:09:05.335205   32269 command_runner.go:130] >   resourceVersion: "362"
	I0707 16:09:05.335209   32269 command_runner.go:130] >   uid: 871d2be9-274d-4c69-bf51-609656806846
	I0707 16:09:05.335279   32269 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0707 16:09:05.345563   32269 node_ready.go:35] waiting up to 6m0s for node "multinode-136000" to be "Ready" ...
	I0707 16:09:05.412421   32269 request.go:628] Waited for 66.7956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:05.412545   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:05.412558   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:05.412578   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:05.412590   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:05.415359   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:05.415375   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:05.415383   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:05.415390   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:05.415397   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:05.415404   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:05 GMT
	I0707 16:09:05.415413   32269 round_trippers.go:580]     Audit-Id: 80a09fbe-872e-4b22-85ef-4eab5151afb2
	I0707 16:09:05.415419   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:05.415514   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:05.916579   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:05.916600   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:05.916613   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:05.916623   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:05.920140   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:05.920157   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:05.920165   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:05.920172   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:05.920181   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:06 GMT
	I0707 16:09:05.920189   32269 round_trippers.go:580]     Audit-Id: dfdb7d72-33cd-40cb-bef4-19fd88e22b44
	I0707 16:09:05.920195   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:05.920202   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:05.920324   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:06.417273   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:06.417289   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:06.417298   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:06.417307   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:06.419481   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:06.419490   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:06.419496   32269 round_trippers.go:580]     Audit-Id: a96553df-a429-4dd9-ac53-66f1f808da09
	I0707 16:09:06.419503   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:06.419511   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:06.419521   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:06.419532   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:06.419537   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:06 GMT
	I0707 16:09:06.419665   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:06.916475   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:06.916496   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:06.916508   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:06.916518   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:06.919559   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:06.919574   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:06.919582   32269 round_trippers.go:580]     Audit-Id: b8c8cfba-cd73-4bc8-8530-46440084c6df
	I0707 16:09:06.919589   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:06.919595   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:06.919604   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:06.919615   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:06.919626   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:07 GMT
	I0707 16:09:06.919702   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:07.417621   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:07.417643   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:07.417656   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:07.417670   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:07.420710   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:07.420727   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:07.420736   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:07 GMT
	I0707 16:09:07.420751   32269 round_trippers.go:580]     Audit-Id: 832ae0d6-9076-45bd-b1ce-8732feff964b
	I0707 16:09:07.420759   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:07.420767   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:07.420775   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:07.420781   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:07.420867   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:07.421120   32269 node_ready.go:58] node "multinode-136000" has status "Ready":"False"
	I0707 16:09:07.916898   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:07.916912   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:07.916920   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:07.916967   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:07.918333   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:07.918342   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:07.918348   32269 round_trippers.go:580]     Audit-Id: 6c3d512a-d397-4a3c-b096-8ef39478a084
	I0707 16:09:07.918353   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:07.918358   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:07.918362   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:07.918368   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:07.918374   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:08 GMT
	I0707 16:09:07.918482   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:08.417444   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:08.417469   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:08.417482   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:08.417492   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:08.420892   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:08.420909   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:08.420917   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:08.420925   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:08.420931   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:08.420939   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:08.420946   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:08 GMT
	I0707 16:09:08.420953   32269 round_trippers.go:580]     Audit-Id: 5556eee3-2d34-47a2-ad1b-790ff7f7ea55
	I0707 16:09:08.421048   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:08.917516   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:08.917539   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:08.917556   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:08.917567   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:08.920889   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:08.920905   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:08.920913   32269 round_trippers.go:580]     Audit-Id: a5103e6e-13fe-491b-8534-5d1421099f21
	I0707 16:09:08.920920   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:08.920926   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:08.920933   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:08.920941   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:08.920948   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:09 GMT
	I0707 16:09:08.921055   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:08.921301   32269 node_ready.go:49] node "multinode-136000" has status "Ready":"True"
	I0707 16:09:08.921311   32269 node_ready.go:38] duration metric: took 3.575656563s waiting for node "multinode-136000" to be "Ready" ...
	I0707 16:09:08.921318   32269 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0707 16:09:08.921360   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods
	I0707 16:09:08.921367   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:08.921375   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:08.921383   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:08.924636   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:08.924646   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:08.924651   32269 round_trippers.go:580]     Audit-Id: aeb6daae-7dcd-49a1-9dc8-987167f24b30
	I0707 16:09:08.924656   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:08.924664   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:08.924672   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:08.924679   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:08.924688   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:09 GMT
	I0707 16:09:08.925560   32269 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1177"},"items":[{"metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84372 chars]
	I0707 16:09:08.927373   32269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:08.927410   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:08.927415   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:08.927422   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:08.927429   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:08.928989   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:08.928998   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:08.929005   32269 round_trippers.go:580]     Audit-Id: b57bd391-cbb3-4f54-96ec-36f7e50bb1e6
	I0707 16:09:08.929013   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:08.929021   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:08.929028   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:08.929040   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:08.929045   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:09 GMT
	I0707 16:09:08.929120   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:08.929342   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:08.929349   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:08.929355   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:08.929360   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:08.930780   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:08.930788   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:08.930794   32269 round_trippers.go:580]     Audit-Id: c59edf6e-b53c-4627-a3e6-9d51400d6a1e
	I0707 16:09:08.930801   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:08.930808   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:08.930815   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:08.930825   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:08.930832   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:09 GMT
	I0707 16:09:08.930950   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:09.433111   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:09.433137   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:09.433150   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:09.433162   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:09.436313   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:09.436328   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:09.436336   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:09.436343   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:09.436350   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:09.436356   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:09 GMT
	I0707 16:09:09.436363   32269 round_trippers.go:580]     Audit-Id: 05f79faf-1f6b-4d67-8994-b05bb3ccaa45
	I0707 16:09:09.436370   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:09.436517   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:09.436878   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:09.436888   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:09.436896   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:09.436903   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:09.438350   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:09.438359   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:09.438365   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:09.438369   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:09.438374   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:09.438379   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:09 GMT
	I0707 16:09:09.438384   32269 round_trippers.go:580]     Audit-Id: 27be861d-b07f-43c5-a959-3899f8a8a652
	I0707 16:09:09.438389   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:09.438446   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:09.932452   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:09.932482   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:09.932538   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:09.932550   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:09.935561   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:09.935577   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:09.935585   32269 round_trippers.go:580]     Audit-Id: 50a05c04-1c74-4661-b3b3-5ac166dbbae3
	I0707 16:09:09.935592   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:09.935598   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:09.935604   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:09.935612   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:09.935620   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:10 GMT
	I0707 16:09:09.935774   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:09.936139   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:09.936148   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:09.936156   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:09.936163   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:09.937863   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:09.937873   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:09.937879   32269 round_trippers.go:580]     Audit-Id: 66a9c8ae-7629-4b70-bb38-beba25c9312f
	I0707 16:09:09.937885   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:09.937891   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:09.937897   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:09.937902   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:09.937907   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:10 GMT
	I0707 16:09:09.938015   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:10.432612   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:10.432638   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:10.432654   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:10.432665   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:10.435994   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:10.436010   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:10.436018   32269 round_trippers.go:580]     Audit-Id: 8841c67f-aec3-441d-af44-51aa13cf0655
	I0707 16:09:10.436024   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:10.436031   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:10.436037   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:10.436044   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:10.436051   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:10 GMT
	I0707 16:09:10.436139   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:10.436494   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:10.436503   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:10.436512   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:10.436519   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:10.438160   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:10.438173   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:10.438188   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:10.438200   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:10.438205   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:10.438211   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:10 GMT
	I0707 16:09:10.438218   32269 round_trippers.go:580]     Audit-Id: 7495464c-757d-431c-95d3-e306da73db4a
	I0707 16:09:10.438226   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:10.438300   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:10.931542   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:10.931567   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:10.931581   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:10.931592   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:10.934654   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:10.934672   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:10.934680   32269 round_trippers.go:580]     Audit-Id: 18b01687-2efa-4026-9b3c-23fe058a245b
	I0707 16:09:10.934696   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:10.934705   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:10.934711   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:10.934724   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:10.934733   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:11 GMT
	I0707 16:09:10.934865   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:10.935223   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:10.935232   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:10.935241   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:10.935248   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:10.936970   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:10.936979   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:10.936985   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:10.936990   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:10.936995   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:10.937006   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:10.937011   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:11 GMT
	I0707 16:09:10.937017   32269 round_trippers.go:580]     Audit-Id: 41458f0c-3721-4c17-b633-2820dd891704
	I0707 16:09:10.937066   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:10.937234   32269 pod_ready.go:102] pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace has status "Ready":"False"
	I0707 16:09:11.431984   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:11.432010   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:11.432023   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:11.432033   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:11.434991   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:11.435007   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:11.435016   32269 round_trippers.go:580]     Audit-Id: fef492b0-7b5d-460b-8218-c6089ddc9054
	I0707 16:09:11.435023   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:11.435030   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:11.435037   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:11.435044   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:11.435051   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:11 GMT
	I0707 16:09:11.435134   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:11.435492   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:11.435501   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:11.435509   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:11.435517   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:11.437250   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:11.437259   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:11.437264   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:11 GMT
	I0707 16:09:11.437269   32269 round_trippers.go:580]     Audit-Id: 7ce6b7ba-48c8-44f5-8a20-49d547046baf
	I0707 16:09:11.437275   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:11.437280   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:11.437285   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:11.437290   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:11.437538   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:11.932351   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:11.932378   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:11.932391   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:11.932402   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:11.935523   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:11.935541   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:11.935549   32269 round_trippers.go:580]     Audit-Id: c1a1d5cc-3629-47bd-9825-d61518b052aa
	I0707 16:09:11.935569   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:11.935576   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:11.935587   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:11.935597   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:11.935604   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:12 GMT
	I0707 16:09:11.935679   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:11.936034   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:11.936043   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:11.936051   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:11.936058   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:11.937618   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:11.937628   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:11.937636   32269 round_trippers.go:580]     Audit-Id: c5142700-cf4c-4959-84ee-3e4645a9a60c
	I0707 16:09:11.937644   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:11.937652   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:11.937659   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:11.937664   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:11.937669   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:12 GMT
	I0707 16:09:11.937868   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:12.431654   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:12.431681   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:12.431696   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:12.431747   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:12.434746   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:12.434765   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:12.434774   32269 round_trippers.go:580]     Audit-Id: ae4a2629-49db-4a84-8fae-27809d5a52fb
	I0707 16:09:12.434781   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:12.434787   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:12.434794   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:12.434801   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:12.434807   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:12 GMT
	I0707 16:09:12.434910   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:12.435267   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:12.435276   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:12.435284   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:12.435291   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:12.437061   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:12.437069   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:12.437078   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:12.437086   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:12 GMT
	I0707 16:09:12.437093   32269 round_trippers.go:580]     Audit-Id: 48abb8da-cc55-45a5-9448-cc63020ff9a0
	I0707 16:09:12.437100   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:12.437109   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:12.437118   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:12.437210   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:12.932501   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:12.932526   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:12.932539   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:12.932549   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:12.935507   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:12.935521   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:12.935529   32269 round_trippers.go:580]     Audit-Id: a0dfdc7d-f691-4e05-8897-98f652c1e583
	I0707 16:09:12.935536   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:12.935542   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:12.935549   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:12.935556   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:12.935564   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:13 GMT
	I0707 16:09:12.935648   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:12.936000   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:12.936009   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:12.936017   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:12.936028   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:12.937567   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:12.937576   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:12.937581   32269 round_trippers.go:580]     Audit-Id: 48ec8c46-fcc2-46a1-a997-5013dc13270d
	I0707 16:09:12.937586   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:12.937592   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:12.937600   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:12.937607   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:12.937613   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:13 GMT
	I0707 16:09:12.937701   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:12.937871   32269 pod_ready.go:102] pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace has status "Ready":"False"
	I0707 16:09:13.432501   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:13.432525   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:13.432537   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:13.432548   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:13.435570   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:13.435579   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:13.435585   32269 round_trippers.go:580]     Audit-Id: 911352b7-2cd8-4110-8bc7-135549acd44a
	I0707 16:09:13.435591   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:13.435596   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:13.435602   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:13.435607   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:13.435612   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:13 GMT
	I0707 16:09:13.435724   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:13.436002   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:13.436009   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:13.436015   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:13.436020   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:13.437629   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:13.437638   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:13.437643   32269 round_trippers.go:580]     Audit-Id: 35fa1c30-ffb4-4e3d-99e6-8b3f4e6b8cfd
	I0707 16:09:13.437648   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:13.437654   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:13.437659   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:13.437679   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:13.437687   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:13 GMT
	I0707 16:09:13.437882   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:13.931603   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:13.931618   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:13.931625   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:13.931631   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:13.934547   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:13.934558   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:13.934563   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:13.934569   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:13.934574   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:14 GMT
	I0707 16:09:13.934579   32269 round_trippers.go:580]     Audit-Id: 8c3938fd-b5ae-43a7-a2f8-53654815dfb8
	I0707 16:09:13.934584   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:13.934589   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:13.936405   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:13.936687   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:13.936694   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:13.936700   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:13.936706   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:13.938712   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:13.938721   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:13.938728   32269 round_trippers.go:580]     Audit-Id: c29744ca-348e-4f19-8c6d-a3ac237cf76f
	I0707 16:09:13.938733   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:13.938739   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:13.938744   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:13.938749   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:13.938756   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:14 GMT
	I0707 16:09:13.938954   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:14.432651   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:14.432673   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:14.432685   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:14.432696   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:14.436182   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:14.436199   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:14.436207   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:14.436214   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:14.436220   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:14 GMT
	I0707 16:09:14.436229   32269 round_trippers.go:580]     Audit-Id: 96bb1766-d5e8-4489-9dd0-d38d7b956d85
	I0707 16:09:14.436235   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:14.436242   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:14.436604   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:14.436962   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:14.436971   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:14.436980   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:14.436992   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:14.438689   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:14.438698   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:14.438704   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:14.438710   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:14.438716   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:14 GMT
	I0707 16:09:14.438721   32269 round_trippers.go:580]     Audit-Id: 1b8feaa0-612a-47e6-8289-51c0903debf1
	I0707 16:09:14.438726   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:14.438731   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:14.438822   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:14.931642   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:14.931673   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:14.931733   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:14.931747   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:14.934772   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:14.934788   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:14.934796   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:15 GMT
	I0707 16:09:14.934802   32269 round_trippers.go:580]     Audit-Id: a6238cf3-29ab-4906-bcfb-9e1e12f00304
	I0707 16:09:14.934809   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:14.934816   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:14.934822   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:14.934830   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:14.934894   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:14.935245   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:14.935253   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:14.935261   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:14.935268   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:14.936732   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:14.936742   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:14.936748   32269 round_trippers.go:580]     Audit-Id: 564aa9cf-8fa0-4aef-a45c-79a27c332c56
	I0707 16:09:14.936756   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:14.936764   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:14.936770   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:14.936776   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:14.936781   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:15 GMT
	I0707 16:09:14.936857   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:15.431433   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:15.431450   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:15.431459   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:15.431466   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:15.433691   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:15.433704   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:15.433713   32269 round_trippers.go:580]     Audit-Id: 764f8f09-d899-48da-b5ac-a7796611b65d
	I0707 16:09:15.433723   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:15.433732   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:15.433737   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:15.433743   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:15.433748   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:15 GMT
	I0707 16:09:15.433825   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:15.434105   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:15.434112   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:15.434118   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:15.434123   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:15.435416   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:15.435423   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:15.435428   32269 round_trippers.go:580]     Audit-Id: 1ec0e114-9d51-47c2-aa2d-0b057bf236a7
	I0707 16:09:15.435432   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:15.435437   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:15.435441   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:15.435447   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:15.435453   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:15 GMT
	I0707 16:09:15.435548   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:15.435726   32269 pod_ready.go:102] pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace has status "Ready":"False"
	I0707 16:09:15.931589   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:15.931614   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:15.931662   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:15.931677   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:15.934599   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:15.934614   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:15.934624   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:15.934634   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:15.934646   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:15.934661   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:16 GMT
	I0707 16:09:15.934670   32269 round_trippers.go:580]     Audit-Id: 248ae3aa-ff53-4bd6-bc2b-dcba9f2f9df1
	I0707 16:09:15.934676   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:15.934745   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:15.935106   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:15.935115   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:15.935123   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:15.935130   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:15.936831   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:15.936841   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:15.936847   32269 round_trippers.go:580]     Audit-Id: 2ac834bd-f7b2-4dc9-8f62-463d7e5d3489
	I0707 16:09:15.936852   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:15.936860   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:15.936867   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:15.936871   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:15.936876   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:16 GMT
	I0707 16:09:15.936943   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:16.432998   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:16.433020   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:16.433032   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:16.433042   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:16.436191   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:16.436206   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:16.436214   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:16.436221   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:16.436227   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:16 GMT
	I0707 16:09:16.436234   32269 round_trippers.go:580]     Audit-Id: ecb7f4df-3d34-472e-99bd-f3e0dc86427f
	I0707 16:09:16.436241   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:16.436248   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:16.436316   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:16.436681   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:16.436690   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:16.436698   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:16.436705   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:16.438341   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:16.438350   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:16.438356   32269 round_trippers.go:580]     Audit-Id: 649de989-8499-49db-a07a-caaf90422dba
	I0707 16:09:16.438362   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:16.438367   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:16.438372   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:16.438378   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:16.438384   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:16 GMT
	I0707 16:09:16.438444   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:16.931921   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:16.931948   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:16.931962   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:16.931973   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:16.935171   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:16.935187   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:16.935195   32269 round_trippers.go:580]     Audit-Id: c2b8a445-8f16-45a6-ac73-459a720f8539
	I0707 16:09:16.935202   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:16.935209   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:16.935215   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:16.935223   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:16.935229   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:17 GMT
	I0707 16:09:16.935314   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:16.935671   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:16.935679   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:16.935687   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:16.935695   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:16.937057   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:16.937074   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:16.937083   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:16.937090   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:17 GMT
	I0707 16:09:16.937096   32269 round_trippers.go:580]     Audit-Id: c0154ed3-e161-4fc3-87d7-4f78b0586987
	I0707 16:09:16.937101   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:16.937106   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:16.937112   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:16.937245   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:17.432699   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:17.432718   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:17.432727   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:17.432735   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:17.434722   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:17.434733   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:17.434739   32269 round_trippers.go:580]     Audit-Id: 0915b4ce-6357-46e4-a52f-98bef632f8f5
	I0707 16:09:17.434745   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:17.434750   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:17.434755   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:17.434761   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:17.434765   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:17 GMT
	I0707 16:09:17.434810   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:17.435083   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:17.435089   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:17.435095   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:17.435101   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:17.436935   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:17.436945   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:17.436951   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:17.436956   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:17.436961   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:17 GMT
	I0707 16:09:17.436965   32269 round_trippers.go:580]     Audit-Id: f1de9b69-c0f9-4fd1-a58d-58791b866e84
	I0707 16:09:17.436971   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:17.436975   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:17.437026   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:17.437206   32269 pod_ready.go:102] pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace has status "Ready":"False"
	I0707 16:09:17.931424   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:17.931440   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:17.931447   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:17.931452   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:17.933230   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:17.933243   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:17.933250   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:17.933254   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:17.933259   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:18 GMT
	I0707 16:09:17.933264   32269 round_trippers.go:580]     Audit-Id: d4885c3b-0ad6-454f-bc77-620c39ffebf1
	I0707 16:09:17.933275   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:17.933280   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:17.933332   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:17.933605   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:17.933611   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:17.933617   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:17.933622   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:17.934975   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:17.934984   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:17.934990   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:17.934995   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:17.935004   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:18 GMT
	I0707 16:09:17.935010   32269 round_trippers.go:580]     Audit-Id: 35d23c13-0cff-4da5-90ec-b90566a02d1f
	I0707 16:09:17.935017   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:17.935024   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:17.935169   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:18.431536   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:18.431561   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:18.431574   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:18.431584   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:18.435141   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:18.435157   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:18.435168   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:18.435179   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:18 GMT
	I0707 16:09:18.435195   32269 round_trippers.go:580]     Audit-Id: 0d8f7c27-02e5-4e86-a2ae-e34db6a25ab0
	I0707 16:09:18.435208   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:18.435219   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:18.435228   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:18.435437   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:18.435732   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:18.435738   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:18.435744   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:18.435750   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:18.437192   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:18.437201   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:18.437207   32269 round_trippers.go:580]     Audit-Id: 32766576-5e01-4468-86fe-d463fd4040f8
	I0707 16:09:18.437212   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:18.437221   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:18.437228   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:18.437234   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:18.437239   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:18 GMT
	I0707 16:09:18.437374   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:18.932046   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:18.932072   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:18.932131   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:18.932145   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:18.935241   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:18.935257   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:18.935265   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:18.935272   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:18.935278   32269 round_trippers.go:580]     Audit-Id: a39fbe14-1675-4ffb-a81e-bfff111060ad
	I0707 16:09:18.935285   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:18.935292   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:18.935301   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:18.935465   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:18.935822   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:18.935831   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:18.935839   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:18.935846   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:18.937864   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:18.937873   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:18.937878   32269 round_trippers.go:580]     Audit-Id: 66e10f6e-51c7-42db-866f-bd7ffe368343
	I0707 16:09:18.937884   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:18.937896   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:18.937908   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:18.937915   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:18.937924   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:18.938179   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:19.433520   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:19.433549   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.433561   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.433571   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.436876   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:19.436892   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.436899   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.436906   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.436914   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.436934   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.436944   32269 round_trippers.go:580]     Audit-Id: 61836a81-b6fe-4aa6-8591-6c4841dbebd6
	I0707 16:09:19.436954   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.437210   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1214","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0707 16:09:19.437577   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:19.437586   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.437595   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.437603   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.439711   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:19.439720   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.439725   32269 round_trippers.go:580]     Audit-Id: 5592298c-8923-4186-af8f-c0a8cd2c6d4c
	I0707 16:09:19.439730   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.439736   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.439740   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.439746   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.439751   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.439869   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:19.440069   32269 pod_ready.go:92] pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace has status "Ready":"True"
	I0707 16:09:19.440090   32269 pod_ready.go:81] duration metric: took 10.512477733s waiting for pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.440110   32269 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.440136   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-136000
	I0707 16:09:19.440140   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.440146   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.440152   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.441771   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:19.441779   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.441784   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.441789   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.441794   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.441798   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.441803   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.441808   32269 round_trippers.go:580]     Audit-Id: 75e3b5ac-71ec-4e41-834d-6e293dba8b29
	I0707 16:09:19.441995   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-136000","namespace":"kube-system","uid":"636b837f-c544-4688-aa2b-2f602c1546c6","resourceVersion":"1178","creationTimestamp":"2023-07-07T23:02:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.55:2379","kubernetes.io/config.hash":"8669674c45323598ebbb888fff5e6cb4","kubernetes.io/config.mirror":"8669674c45323598ebbb888fff5e6cb4","kubernetes.io/config.seen":"2023-07-07T23:02:20.447968150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6072 chars]
	I0707 16:09:19.442194   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:19.442201   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.442206   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.442212   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.443537   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:19.443547   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.443556   32269 round_trippers.go:580]     Audit-Id: d39784bc-5c1b-4b72-85cd-1f3c8b625936
	I0707 16:09:19.443564   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.443570   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.443576   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.443584   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.443591   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.443712   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:19.443881   32269 pod_ready.go:92] pod "etcd-multinode-136000" in "kube-system" namespace has status "Ready":"True"
	I0707 16:09:19.443888   32269 pod_ready.go:81] duration metric: took 3.772999ms waiting for pod "etcd-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.443898   32269 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.443924   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-136000
	I0707 16:09:19.443928   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.443934   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.443941   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.446052   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:19.446075   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.446089   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.446100   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.446115   32269 round_trippers.go:580]     Audit-Id: 99d08dbf-6a1d-4f21-b23c-7ea354c9d6b0
	I0707 16:09:19.446123   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.446128   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.446133   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.446247   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-136000","namespace":"kube-system","uid":"e33f6220-5f99-43a2-adc8-49399f82e89c","resourceVersion":"1199","creationTimestamp":"2023-07-07T23:02:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.55:8443","kubernetes.io/config.hash":"10d234a603360886d3e49d7f2ebd7116","kubernetes.io/config.mirror":"10d234a603360886d3e49d7f2ebd7116","kubernetes.io/config.seen":"2023-07-07T23:02:20.447888975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7609 chars]
	I0707 16:09:19.446578   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:19.446588   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.446595   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.446603   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.449003   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:19.449021   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.449030   32269 round_trippers.go:580]     Audit-Id: f4be0bd7-5a3c-4d27-a16b-7c6638768a5b
	I0707 16:09:19.449039   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.449046   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.449069   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.449081   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.449090   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.449474   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:19.449724   32269 pod_ready.go:92] pod "kube-apiserver-multinode-136000" in "kube-system" namespace has status "Ready":"True"
	I0707 16:09:19.449759   32269 pod_ready.go:81] duration metric: took 5.830633ms waiting for pod "kube-apiserver-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.449772   32269 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.449815   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-136000
	I0707 16:09:19.449824   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.449833   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.449840   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.451637   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:19.451653   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.451662   32269 round_trippers.go:580]     Audit-Id: 45d78963-f0d4-4f78-b593-c5cc0bb56701
	I0707 16:09:19.451671   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.451679   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.451688   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.451697   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.451705   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.451840   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-136000","namespace":"kube-system","uid":"a4c59edf-0147-4ae9-a3d0-b7559b3ab6c9","resourceVersion":"1184","creationTimestamp":"2023-07-07T23:02:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8b16ffd443c4ff5953586fb0655a6320","kubernetes.io/config.mirror":"8b16ffd443c4ff5953586fb0655a6320","kubernetes.io/config.seen":"2023-07-07T23:02:28.360407979Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0707 16:09:19.452176   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:19.452186   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.452195   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.452204   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.453861   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:19.453876   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.453884   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.453892   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.453900   32269 round_trippers.go:580]     Audit-Id: 19e52873-4bf7-45bd-af76-b73a57357cd2
	I0707 16:09:19.453908   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.453916   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.453923   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.454010   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:19.454234   32269 pod_ready.go:92] pod "kube-controller-manager-multinode-136000" in "kube-system" namespace has status "Ready":"True"
	I0707 16:09:19.454249   32269 pod_ready.go:81] duration metric: took 4.465381ms waiting for pod "kube-controller-manager-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.454262   32269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5865g" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.454298   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5865g
	I0707 16:09:19.454304   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.454314   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.454323   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.456165   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:19.456180   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.456195   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.456211   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.456226   32269 round_trippers.go:580]     Audit-Id: e5df0ccd-637e-4c21-9862-71ca42d71c70
	I0707 16:09:19.456240   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.456253   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.456264   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.456357   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5865g","generateName":"kube-proxy-","namespace":"kube-system","uid":"3b0f7832-d4d7-41e7-ab55-08284cf98427","resourceVersion":"1059","creationTimestamp":"2023-07-07T23:04:00Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"40ec798e-383e-4e94-b5d5-10fc13347c1a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"40ec798e-383e-4e94-b5d5-10fc13347c1a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0707 16:09:19.456666   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000-m03
	I0707 16:09:19.456674   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.456681   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.456688   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.458334   32269 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0707 16:09:19.458345   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.458356   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.458363   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.458371   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.458379   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.458391   32269 round_trippers.go:580]     Content-Length: 210
	I0707 16:09:19.458398   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.458407   32269 round_trippers.go:580]     Audit-Id: c6049222-1fa0-4adf-9a3e-1be692a21797
	I0707 16:09:19.458420   32269 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-136000-m03\" not found","reason":"NotFound","details":{"name":"multinode-136000-m03","kind":"nodes"},"code":404}
	I0707 16:09:19.458478   32269 pod_ready.go:97] node "multinode-136000-m03" hosting pod "kube-proxy-5865g" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-136000-m03": nodes "multinode-136000-m03" not found
	I0707 16:09:19.458487   32269 pod_ready.go:81] duration metric: took 4.218638ms waiting for pod "kube-proxy-5865g" in "kube-system" namespace to be "Ready" ...
	E0707 16:09:19.458493   32269 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-136000-m03" hosting pod "kube-proxy-5865g" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-136000-m03": nodes "multinode-136000-m03" not found
	I0707 16:09:19.458500   32269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dvrg9" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.633890   32269 request.go:628] Waited for 175.258813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvrg9
	I0707 16:09:19.633950   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvrg9
	I0707 16:09:19.633961   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.633974   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.633985   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.636841   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:19.636878   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.636888   32269 round_trippers.go:580]     Audit-Id: f3c0ea1d-a148-4b4c-9a4f-d6e5058c361f
	I0707 16:09:19.636898   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.636906   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.636918   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.636926   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.636933   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.637088   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dvrg9","generateName":"kube-proxy-","namespace":"kube-system","uid":"f7473507-c702-444e-b727-71c8a8cc4c08","resourceVersion":"936","creationTimestamp":"2023-07-07T23:03:17Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"40ec798e-383e-4e94-b5d5-10fc13347c1a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:03:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"40ec798e-383e-4e94-b5d5-10fc13347c1a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0707 16:09:19.835005   32269 request.go:628] Waited for 197.576904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000-m02
	I0707 16:09:19.835132   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000-m02
	I0707 16:09:19.835144   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.835157   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.835168   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.838052   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:19.838068   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.838076   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.838083   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.838089   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.838096   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.838103   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.838110   32269 round_trippers.go:580]     Audit-Id: 23b06935-e725-4b96-81e6-424fc0c4c00b
	I0707 16:09:19.838201   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000-m02","uid":"e53ac27c-579d-4edc-87f1-2f80a931d265","resourceVersion":"955","creationTimestamp":"2023-07-07T23:06:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:06:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3252 chars]
	I0707 16:09:19.838409   32269 pod_ready.go:92] pod "kube-proxy-dvrg9" in "kube-system" namespace has status "Ready":"True"
	I0707 16:09:19.838418   32269 pod_ready.go:81] duration metric: took 379.898672ms waiting for pod "kube-proxy-dvrg9" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.838428   32269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wd4p8" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:20.034063   32269 request.go:628] Waited for 195.585239ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wd4p8
	I0707 16:09:20.034140   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wd4p8
	I0707 16:09:20.034151   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:20.034163   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:20.034177   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:20.036997   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:20.037013   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:20.037023   32269 round_trippers.go:580]     Audit-Id: 9aa64d3f-45a5-4cdb-9cb6-a52526f27641
	I0707 16:09:20.037035   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:20.037044   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:20.037054   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:20.037062   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:20.037069   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:20 GMT
	I0707 16:09:20.037259   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wd4p8","generateName":"kube-proxy-","namespace":"kube-system","uid":"4979ea40-a983-4f80-b7ac-f6e05cd5f6b4","resourceVersion":"1101","creationTimestamp":"2023-07-07T23:02:40Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"40ec798e-383e-4e94-b5d5-10fc13347c1a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"40ec798e-383e-4e94-b5d5-10fc13347c1a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
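
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from the Kubernetes client's own token-bucket rate limiter, which delays requests locally before they ever reach the API server's priority-and-fairness machinery. A minimal sketch of that behaviour in Go, using golang.org/x/time/rate (the limiter client-go builds on); the qps/burst values are illustrative, not minikube's actual settings:

	package main

	import (
		"context"
		"fmt"
		"time"

		"golang.org/x/time/rate"
	)

	func main() {
		// Token bucket: refill 5 tokens/second, hold at most 10.
		// Requests beyond the burst block in Wait until a token frees up.
		limiter := rate.NewLimiter(rate.Limit(5), 10)
		for i := 0; i < 15; i++ {
			start := time.Now()
			_ = limiter.Wait(context.Background())
			if wait := time.Since(start); wait > time.Millisecond {
				// Mirrors the "Waited for ..." request.go log lines above.
				fmt.Printf("request %d waited %v\n", i, wait)
			}
		}
	}
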
	I0707 16:09:20.233825   32269 request.go:628] Waited for 196.195161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:20.233875   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:20.233884   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:20.233933   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:20.233948   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:20.237003   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:20.237026   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:20.237036   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:20.237047   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:20.237076   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:20.237088   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:20.237098   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:20 GMT
	I0707 16:09:20.237105   32269 round_trippers.go:580]     Audit-Id: ef0a30c5-bfcf-4638-8f73-b602910b21c4
	I0707 16:09:20.237256   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:20.237505   32269 pod_ready.go:92] pod "kube-proxy-wd4p8" in "kube-system" namespace has status "Ready":"True"
	I0707 16:09:20.237516   32269 pod_ready.go:81] duration metric: took 399.073383ms waiting for pod "kube-proxy-wd4p8" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:20.237528   32269 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:20.433720   32269 request.go:628] Waited for 196.064742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-136000
	I0707 16:09:20.433770   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-136000
	I0707 16:09:20.433779   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:20.433792   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:20.433805   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:20.436742   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:20.436766   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:20.436776   32269 round_trippers.go:580]     Audit-Id: a3bf8398-c656-4a6b-b611-4440b55f37c0
	I0707 16:09:20.436786   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:20.436795   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:20.436802   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:20.436809   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:20.436818   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:20 GMT
	I0707 16:09:20.436977   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-136000","namespace":"kube-system","uid":"90cc3143-cca1-4ac0-9c0a-0bfce8a8d99e","resourceVersion":"1197","creationTimestamp":"2023-07-07T23:02:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"81a87492d868eacbd03c1d020dad533c","kubernetes.io/config.mirror":"81a87492d868eacbd03c1d020dad533c","kubernetes.io/config.seen":"2023-07-07T23:02:28.360408566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0707 16:09:20.635356   32269 request.go:628] Waited for 198.075675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:20.635433   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:20.635443   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:20.635457   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:20.635471   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:20.638427   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:20.638448   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:20.638460   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:20.638468   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:20 GMT
	I0707 16:09:20.638475   32269 round_trippers.go:580]     Audit-Id: 6443f5a9-c0c8-4258-a0d2-2fa51f1d4bfe
	I0707 16:09:20.638481   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:20.638489   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:20.638495   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:20.638688   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:20.638944   32269 pod_ready.go:92] pod "kube-scheduler-multinode-136000" in "kube-system" namespace has status "Ready":"True"
	I0707 16:09:20.638955   32269 pod_ready.go:81] duration metric: took 401.410417ms waiting for pod "kube-scheduler-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:20.638965   32269 pod_ready.go:38] duration metric: took 11.717381449s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
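
Each pod_ready.go line above boils down to one predicate: scan the pod's status conditions for the "Ready" condition and require it to be "True". A trimmed stand-in for that check (the structs are simplified substitutes for the corev1 API types, not the real ones):

	package main

	import "fmt"

	// PodCondition/PodStatus keep only the fields the readiness
	// predicate needs; the real corev1 types carry much more.
	type PodCondition struct {
		Type   string // e.g. "Ready"
		Status string // "True", "False", or "Unknown"
	}

	type PodStatus struct {
		Conditions []PodCondition
	}

	// isPodReady is the predicate behind the pod_ready.go:92 lines above.
	func isPodReady(st PodStatus) bool {
		for _, c := range st.Conditions {
			if c.Type == "Ready" {
				return c.Status == "True"
			}
		}
		return false
	}

	func main() {
		st := PodStatus{Conditions: []PodCondition{{Type: "Ready", Status: "True"}}}
		fmt.Println("ready:", isPodReady(st)) // ready: true
	}
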
	I0707 16:09:20.638979   32269 api_server.go:52] waiting for apiserver process to appear ...
	I0707 16:09:20.639063   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0707 16:09:20.647618   32269 command_runner.go:130] > 1700
	I0707 16:09:20.647726   32269 api_server.go:72] duration metric: took 15.354762993s to wait for apiserver process to appear ...
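
The process check above runs pgrep inside the VM and treats the printed PID (1700 here) as proof the apiserver is up. A rough local equivalent of that probe; in minikube the command travels over ssh_runner into the guest rather than executing on the host:

	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
	)

	// apiserverPID runs the same pgrep probe the log shows and parses
	// the single PID it prints. A non-zero exit (no match) surfaces as err.
	func apiserverPID() (int, error) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			return 0, fmt.Errorf("apiserver process not found: %w", err)
		}
		return strconv.Atoi(strings.TrimSpace(string(out)))
	}

	func main() {
		if pid, err := apiserverPID(); err == nil {
			fmt.Println("apiserver pid:", pid)
		}
	}
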
	I0707 16:09:20.647734   32269 api_server.go:88] waiting for apiserver healthz status ...
	I0707 16:09:20.647743   32269 api_server.go:253] Checking apiserver healthz at https://192.168.64.55:8443/healthz ...
	I0707 16:09:20.651169   32269 api_server.go:279] https://192.168.64.55:8443/healthz returned 200:
	ok
	I0707 16:09:20.651197   32269 round_trippers.go:463] GET https://192.168.64.55:8443/version
	I0707 16:09:20.651201   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:20.651208   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:20.651214   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:20.651929   32269 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0707 16:09:20.651940   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:20.651946   32269 round_trippers.go:580]     Content-Length: 263
	I0707 16:09:20.651951   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:20 GMT
	I0707 16:09:20.651957   32269 round_trippers.go:580]     Audit-Id: 4de76027-d734-4997-963b-e1d382aa8cdc
	I0707 16:09:20.651961   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:20.651966   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:20.651972   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:20.651976   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:20.651985   32269 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0707 16:09:20.652006   32269 api_server.go:141] control plane version: v1.27.3
	I0707 16:09:20.652013   32269 api_server.go:131] duration metric: took 4.275057ms to wait for apiserver health ...
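
The health gate is two plain HTTPS GETs: /healthz must answer 200 with body "ok", and /version returns the JSON block shown, from which the control-plane version is read. A minimal sketch of the same probe; it skips TLS verification for brevity, whereas the real client authenticates with the cluster's client certificates and CA:

	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"io"
		"net/http"
	)

	// versionInfo matches the relevant fields of the /version payload above.
	type versionInfo struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"`
	}

	func main() {
		// InsecureSkipVerify keeps the sketch short; do not do this in
		// real code — pin the cluster CA and present a client cert.
		c := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}

		resp, err := c.Get("https://192.168.64.55:8443/healthz")
		if err != nil || resp.StatusCode != http.StatusOK {
			panic("apiserver not healthy")
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Println("healthz:", string(body)) // expect "ok"

		resp, err = c.Get("https://192.168.64.55:8443/version")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var v versionInfo
		if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion) // v1.27.3 above
	}
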
	I0707 16:09:20.652017   32269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0707 16:09:20.835622   32269 request.go:628] Waited for 183.544367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods
	I0707 16:09:20.835721   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods
	I0707 16:09:20.835757   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:20.835770   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:20.835782   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:20.844240   32269 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0707 16:09:20.844254   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:20.844260   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:20.844294   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:20 GMT
	I0707 16:09:20.844300   32269 round_trippers.go:580]     Audit-Id: 5b219ec6-46a4-48ec-9b7f-caf71bccf436
	I0707 16:09:20.844305   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:20.844309   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:20.844315   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:20.845669   32269 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1218"},"items":[{"metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1214","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83346 chars]
	I0707 16:09:20.847522   32269 system_pods.go:59] 12 kube-system pods found
	I0707 16:09:20.847533   32269 system_pods.go:61] "coredns-5d78c9869d-78qmb" [d9671f13-fa08-4161-b216-53f645b9a1c1] Running
	I0707 16:09:20.847537   32269 system_pods.go:61] "etcd-multinode-136000" [636b837f-c544-4688-aa2b-2f602c1546c6] Running
	I0707 16:09:20.847540   32269 system_pods.go:61] "kindnet-gj2vg" [596c8647-685e-449c-86c0-9aeb7dddb2f5] Running
	I0707 16:09:20.847544   32269 system_pods.go:61] "kindnet-h8rpq" [30c883b3-9941-48da-a543-d1649a5418f9] Running
	I0707 16:09:20.847556   32269 system_pods.go:61] "kindnet-zpx7k" [179bc03c-a64f-48bc-9bb9-52e5c91e5037] Running
	I0707 16:09:20.847562   32269 system_pods.go:61] "kube-apiserver-multinode-136000" [e33f6220-5f99-43a2-adc8-49399f82e89c] Running
	I0707 16:09:20.847566   32269 system_pods.go:61] "kube-controller-manager-multinode-136000" [a4c59edf-0147-4ae9-a3d0-b7559b3ab6c9] Running
	I0707 16:09:20.847570   32269 system_pods.go:61] "kube-proxy-5865g" [3b0f7832-d4d7-41e7-ab55-08284cf98427] Running
	I0707 16:09:20.847574   32269 system_pods.go:61] "kube-proxy-dvrg9" [f7473507-c702-444e-b727-71c8a8cc4c08] Running
	I0707 16:09:20.847577   32269 system_pods.go:61] "kube-proxy-wd4p8" [4979ea40-a983-4f80-b7ac-f6e05cd5f6b4] Running
	I0707 16:09:20.847581   32269 system_pods.go:61] "kube-scheduler-multinode-136000" [90cc3143-cca1-4ac0-9c0a-0bfce8a8d99e] Running
	I0707 16:09:20.847584   32269 system_pods.go:61] "storage-provisioner" [e617383f-c16f-44a7-a1a4-a2813ecc84f2] Running
	I0707 16:09:20.847589   32269 system_pods.go:74] duration metric: took 195.563798ms to wait for pod list to return data ...
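
The inventory above lists every kube-system pod and requires a healthy phase before start-up continues. A simplified stand-in for that pass (the real check also matches specific app labels, and the struct below is a trimmed substitute for a corev1.PodList):

	package main

	import "fmt"

	// podPhase keeps only name and phase — all the system_pods
	// lines above actually print.
	type podPhase struct {
		Name  string
		Phase string // "Running", "Pending", "Succeeded", ...
	}

	// allRunning is a simplified version of the gate: every listed
	// pod must be Running (completed pods would also be acceptable).
	func allRunning(pods []podPhase) bool {
		for _, p := range pods {
			if p.Phase != "Running" && p.Phase != "Succeeded" {
				return false
			}
		}
		return len(pods) > 0
	}

	func main() {
		pods := []podPhase{{"coredns-5d78c9869d-78qmb", "Running"}, {"etcd-multinode-136000", "Running"}}
		fmt.Println("all running:", allRunning(pods))
	}
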
	I0707 16:09:20.847594   32269 default_sa.go:34] waiting for default service account to be created ...
	I0707 16:09:21.034242   32269 request.go:628] Waited for 186.58916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/default/serviceaccounts
	I0707 16:09:21.034366   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/default/serviceaccounts
	I0707 16:09:21.034379   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:21.034393   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:21.034404   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:21.037027   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:21.037043   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:21.037052   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:21.037059   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:21.037067   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:21.037074   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:21.037081   32269 round_trippers.go:580]     Content-Length: 262
	I0707 16:09:21.037093   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:21 GMT
	I0707 16:09:21.037101   32269 round_trippers.go:580]     Audit-Id: b3c0b5b1-8a28-4670-bd11-f77f16c1caf4
	I0707 16:09:21.037115   32269 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1219"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5aa1743c-9d67-48b0-a877-1b6e8e0c8ed0","resourceVersion":"299","creationTimestamp":"2023-07-07T23:02:40Z"}}]}
	I0707 16:09:21.037254   32269 default_sa.go:45] found service account: "default"
	I0707 16:09:21.037265   32269 default_sa.go:55] duration metric: took 189.661575ms for default service account to be created ...
	I0707 16:09:21.037272   32269 system_pods.go:116] waiting for k8s-apps to be running ...
	I0707 16:09:21.234447   32269 request.go:628] Waited for 197.091758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods
	I0707 16:09:21.234545   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods
	I0707 16:09:21.234556   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:21.234567   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:21.234578   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:21.238683   32269 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0707 16:09:21.238693   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:21.238715   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:21.238732   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:21.238746   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:21.238756   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:21 GMT
	I0707 16:09:21.238762   32269 round_trippers.go:580]     Audit-Id: 2be8b427-461e-4901-a591-9b649e1aa7ab
	I0707 16:09:21.238771   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:21.239733   32269 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1219"},"items":[{"metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1214","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83346 chars]
	I0707 16:09:21.242370   32269 system_pods.go:86] 12 kube-system pods found
	I0707 16:09:21.242387   32269 system_pods.go:89] "coredns-5d78c9869d-78qmb" [d9671f13-fa08-4161-b216-53f645b9a1c1] Running
	I0707 16:09:21.242392   32269 system_pods.go:89] "etcd-multinode-136000" [636b837f-c544-4688-aa2b-2f602c1546c6] Running
	I0707 16:09:21.242396   32269 system_pods.go:89] "kindnet-gj2vg" [596c8647-685e-449c-86c0-9aeb7dddb2f5] Running
	I0707 16:09:21.242400   32269 system_pods.go:89] "kindnet-h8rpq" [30c883b3-9941-48da-a543-d1649a5418f9] Running
	I0707 16:09:21.242404   32269 system_pods.go:89] "kindnet-zpx7k" [179bc03c-a64f-48bc-9bb9-52e5c91e5037] Running
	I0707 16:09:21.242408   32269 system_pods.go:89] "kube-apiserver-multinode-136000" [e33f6220-5f99-43a2-adc8-49399f82e89c] Running
	I0707 16:09:21.242412   32269 system_pods.go:89] "kube-controller-manager-multinode-136000" [a4c59edf-0147-4ae9-a3d0-b7559b3ab6c9] Running
	I0707 16:09:21.242416   32269 system_pods.go:89] "kube-proxy-5865g" [3b0f7832-d4d7-41e7-ab55-08284cf98427] Running
	I0707 16:09:21.242420   32269 system_pods.go:89] "kube-proxy-dvrg9" [f7473507-c702-444e-b727-71c8a8cc4c08] Running
	I0707 16:09:21.242424   32269 system_pods.go:89] "kube-proxy-wd4p8" [4979ea40-a983-4f80-b7ac-f6e05cd5f6b4] Running
	I0707 16:09:21.242427   32269 system_pods.go:89] "kube-scheduler-multinode-136000" [90cc3143-cca1-4ac0-9c0a-0bfce8a8d99e] Running
	I0707 16:09:21.242434   32269 system_pods.go:89] "storage-provisioner" [e617383f-c16f-44a7-a1a4-a2813ecc84f2] Running
	I0707 16:09:21.242438   32269 system_pods.go:126] duration metric: took 205.156757ms to wait for k8s-apps to be running ...
	I0707 16:09:21.242443   32269 system_svc.go:44] waiting for kubelet service to be running ....
	I0707 16:09:21.242494   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0707 16:09:21.251138   32269 system_svc.go:56] duration metric: took 8.690199ms WaitForService to wait for kubelet.
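
The kubelet probe above relies on systemctl's exit status, not its output: with --quiet, "systemctl is-active" prints nothing and exits 0 only when the unit is active. A small sketch of that convention (run locally here; minikube issues it over SSH inside the VM):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// serviceActive mirrors the is-active probe: a zero exit status
	// from systemctl means the unit is active.
	func serviceActive(unit string) bool {
		return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
	}

	func main() {
		fmt.Println("kubelet active:", serviceActive("kubelet"))
	}
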
	I0707 16:09:21.251150   32269 kubeadm.go:581] duration metric: took 15.958174317s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0707 16:09:21.251176   32269 node_conditions.go:102] verifying NodePressure condition ...
	I0707 16:09:21.433681   32269 request.go:628] Waited for 182.449596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes
	I0707 16:09:21.433763   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes
	I0707 16:09:21.433774   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:21.433786   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:21.433800   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:21.436791   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:21.436807   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:21.436822   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:21.436831   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:21.436838   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:21 GMT
	I0707 16:09:21.436844   32269 round_trippers.go:580]     Audit-Id: bf79a6fc-dad2-4cfe-baa5-67e5bcb57fbc
	I0707 16:09:21.436852   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:21.436859   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:21.437146   32269 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1220"},"items":[{"metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 9472 chars]
	I0707 16:09:21.437531   32269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0707 16:09:21.437544   32269 node_conditions.go:123] node cpu capacity is 2
	I0707 16:09:21.437552   32269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0707 16:09:21.437557   32269 node_conditions.go:123] node cpu capacity is 2
	I0707 16:09:21.437562   32269 node_conditions.go:105] duration metric: took 186.376949ms to run NodePressure ...
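
The NodePressure pass reads each node's capacity straight out of the NodeList: ephemeral storage as a Kubernetes quantity string ("17784752Ki") and CPU as a bare count. A small sketch of decoding just those fields from the node JSON (the struct is a trimmed stand-in, not the full corev1.Node):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// nodeCapacity keeps only the capacity map the check reads.
	type nodeCapacity struct {
		Status struct {
			Capacity map[string]string `json:"capacity"`
		} `json:"status"`
	}

	func main() {
		raw := []byte(`{"status":{"capacity":{"cpu":"2","ephemeral-storage":"17784752Ki"}}}`)
		var n nodeCapacity
		if err := json.Unmarshal(raw, &n); err != nil {
			panic(err)
		}
		// Matches the two node_conditions.go lines logged per node above.
		fmt.Println("ephemeral capacity:", n.Status.Capacity["ephemeral-storage"])
		fmt.Println("cpu capacity:", n.Status.Capacity["cpu"])
	}
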
	I0707 16:09:21.437571   32269 start.go:228] waiting for startup goroutines ...
	I0707 16:09:21.437579   32269 start.go:233] waiting for cluster config update ...
	I0707 16:09:21.437586   32269 start.go:242] writing updated cluster config ...
	I0707 16:09:21.438319   32269 config.go:182] Loaded profile config "multinode-136000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0707 16:09:21.438414   32269 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/config.json ...
	I0707 16:09:21.482320   32269 out.go:177] * Starting worker node multinode-136000-m02 in cluster multinode-136000
	I0707 16:09:21.503950   32269 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0707 16:09:21.503973   32269 cache.go:57] Caching tarball of preloaded images
	I0707 16:09:21.504122   32269 preload.go:174] Found /Users/jenkins/minikube-integration/16845-29196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0707 16:09:21.504131   32269 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0707 16:09:21.504216   32269 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/config.json ...
	I0707 16:09:21.504736   32269 start.go:365] acquiring machines lock for multinode-136000-m02: {Name:mk81f6152b3f423bf222fad0025fe3c8ddb3ea12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0707 16:09:21.504789   32269 start.go:369] acquired machines lock for "multinode-136000-m02" in 39.658µs
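
Concurrent minikube operations serialize machine create/fix work behind a named lock with a delay and timeout (500ms/13m above). An in-process toy of a named lock table to show the shape of it; the real lock is OS-level, so it also excludes other minikube processes:

	package main

	import (
		"fmt"
		"sync"
		"time"
	)

	// locks hands out one mutex per machine name. In-process only;
	// minikube's actual lock works across processes.
	var (
		mu    sync.Mutex
		locks = map[string]*sync.Mutex{}
	)

	func acquire(name string) *sync.Mutex {
		mu.Lock()
		l, ok := locks[name]
		if !ok {
			l = &sync.Mutex{}
			locks[name] = l
		}
		mu.Unlock()
		start := time.Now()
		l.Lock() // the real lock also honors Delay/Timeout settings
		fmt.Printf("acquired machines lock for %q in %v\n", name, time.Since(start))
		return l
	}

	func main() {
		l := acquire("multinode-136000-m02")
		defer l.Unlock()
		// ... create or fix the machine while holding the lock ...
	}
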
	I0707 16:09:21.504811   32269 start.go:96] Skipping create...Using existing machine configuration
	I0707 16:09:21.504815   32269 fix.go:54] fixHost starting: m02
	I0707 16:09:21.505123   32269 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:09:21.505136   32269 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 16:09:21.512234   32269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49243
	I0707 16:09:21.512566   32269 main.go:141] libmachine: () Calling .GetVersion
	I0707 16:09:21.512972   32269 main.go:141] libmachine: Using API Version  1
	I0707 16:09:21.512995   32269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 16:09:21.513198   32269 main.go:141] libmachine: () Calling .GetMachineName
	I0707 16:09:21.513330   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	I0707 16:09:21.513429   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetState
	I0707 16:09:21.513504   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:09:21.513576   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | hyperkit pid from json: 32151
	I0707 16:09:21.514541   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | hyperkit pid 32151 missing from process table
	I0707 16:09:21.514568   32269 fix.go:102] recreateIfNeeded on multinode-136000-m02: state=Stopped err=<nil>
	I0707 16:09:21.514581   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	W0707 16:09:21.514666   32269 fix.go:128] unexpected machine state, will restart: <nil>
	I0707 16:09:21.537173   32269 out.go:177] * Restarting existing hyperkit VM for "multinode-136000-m02" ...
	I0707 16:09:21.579228   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .Start
	I0707 16:09:21.579480   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:09:21.579562   32269 main.go:141] libmachine: (multinode-136000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/hyperkit.pid
	I0707 16:09:21.581299   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | hyperkit pid 32151 missing from process table
	I0707 16:09:21.581312   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | pid 32151 is in state "Stopped"
	I0707 16:09:21.581331   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/hyperkit.pid...
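
The driver decides the VM is actually stopped by reading the saved hyperkit.pid file and asking the kernel whether that PID still exists; sending signal 0 is the standard probe for this. A minimal sketch of the stale-pid-file cleanup (unix-only; the path is a placeholder for the machine directory above):

	package main

	import (
		"fmt"
		"os"
		"strconv"
		"strings"
		"syscall"
	)

	// pidAlive sends signal 0, which delivers nothing but fails if no
	// process with that PID exists.
	func pidAlive(pid int) bool {
		return syscall.Kill(pid, syscall.Signal(0)) == nil
	}

	func main() {
		const pidFile = "hyperkit.pid" // placeholder path
		raw, err := os.ReadFile(pidFile)
		if err != nil {
			return // no pid file: nothing to clean up
		}
		pid, _ := strconv.Atoi(strings.TrimSpace(string(raw)))
		if !pidAlive(pid) {
			// "hyperkit pid ... missing from process table" -> stale file
			fmt.Printf("pid %d is stale, removing %s\n", pid, pidFile)
			os.Remove(pidFile)
		}
	}
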
	I0707 16:09:21.581521   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | Using UUID 671813f0-1d1a-11ee-8196-149d997f80ea
	I0707 16:09:21.611201   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | Generated MAC b2:4b:8:0:c2:14
	I0707 16:09:21.611226   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-136000
	I0707 16:09:21.611360   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"671813f0-1d1a-11ee-8196-149d997f80ea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004e8930)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLin
e:"", process:(*os.Process)(nil)}
	I0707 16:09:21.611389   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"671813f0-1d1a-11ee-8196-149d997f80ea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004e8930)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLin
e:"", process:(*os.Process)(nil)}
	I0707 16:09:21.611509   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "671813f0-1d1a-11ee-8196-149d997f80ea", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/multinode-136000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/tty,log=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/bzimage,/U
sers/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-136000"}
	I0707 16:09:21.611562   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 671813f0-1d1a-11ee-8196-149d997f80ea -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/multinode-136000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/tty,log=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/bzimage,/Users/jenkins/minikube-integration/16845-29196/.minikube/machin
es/multinode-136000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-136000"
	I0707 16:09:21.611576   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0707 16:09:21.612846   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 DEBUG: hyperkit: Pid is 32313
	I0707 16:09:21.613206   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | Attempt 0
	I0707 16:09:21.613225   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:09:21.613270   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | hyperkit pid from json: 32313
	I0707 16:09:21.614973   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | Searching for b2:4b:8:0:c2:14 in /var/db/dhcpd_leases ...
	I0707 16:09:21.615055   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | Found 56 entries in /var/db/dhcpd_leases!
	I0707 16:09:21.615071   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.55 HWAddress:66:77:10:3:27:1c ID:1,66:77:10:3:27:1c Lease:0x64a9ec75}
	I0707 16:09:21.615078   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.57 HWAddress:e2:5d:8d:f1:83:3b ID:1,e2:5d:8d:f1:83:3b Lease:0x64a89ada}
	I0707 16:09:21.615090   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.56 HWAddress:b2:4b:8:0:c2:14 ID:1,b2:4b:8:0:c2:14 Lease:0x64a9ebeb}
	I0707 16:09:21.615101   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | Found match: b2:4b:8:0:c2:14
	I0707 16:09:21.615110   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | IP: 192.168.64.56
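
With no guest agent in the VM, the driver recovers the machine's IP by scanning macOS's /var/db/dhcpd_leases for the entry whose hardware address matches the MAC hyperkit generated (b2:4b:8:0:c2:14 above). A rough sketch of that lookup; the lease-file handling is simplified to the ip_address/hw_address lines the log echoes:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// ipForMAC scans the lease file, whose entries are blocks of
	// key=value lines, and returns the ip_address of the block whose
	// hw_address (prefixed with a type id, e.g. "1,") matches mac.
	func ipForMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac):
				return ip, nil
			}
		}
		return "", fmt.Errorf("no lease for %s", mac)
	}

	func main() {
		ip, err := ipForMAC("/var/db/dhcpd_leases", "b2:4b:8:0:c2:14")
		if err != nil {
			panic(err)
		}
		fmt.Println("IP:", ip) // 192.168.64.56 in the run above
	}
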
	I0707 16:09:21.615130   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetConfigRaw
	I0707 16:09:21.615654   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetIP
	I0707 16:09:21.615844   32269 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/config.json ...
	I0707 16:09:21.616134   32269 machine.go:88] provisioning docker machine ...
	I0707 16:09:21.616144   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	I0707 16:09:21.616252   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetMachineName
	I0707 16:09:21.616335   32269 buildroot.go:166] provisioning hostname "multinode-136000-m02"
	I0707 16:09:21.616347   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetMachineName
	I0707 16:09:21.616427   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:21.616527   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:21.616608   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:21.616692   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:21.616801   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:21.616940   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:09:21.617269   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.56 22 <nil> <nil>}
	I0707 16:09:21.617281   32269 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-136000-m02 && echo "multinode-136000-m02" | sudo tee /etc/hostname
	I0707 16:09:21.619337   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0707 16:09:21.627200   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0707 16:09:21.628054   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0707 16:09:21.628070   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0707 16:09:21.628080   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0707 16:09:21.628093   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0707 16:09:21.993338   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0707 16:09:21.993353   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0707 16:09:22.097423   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0707 16:09:22.097444   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0707 16:09:22.097455   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0707 16:09:22.097465   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0707 16:09:22.098331   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0707 16:09:22.098340   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0707 16:09:26.921488   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:26 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0707 16:09:26.921567   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:26 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0707 16:09:26.921583   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:26 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0707 16:09:56.714714   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-136000-m02
	
	I0707 16:09:56.714729   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:56.714866   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:56.714965   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:56.715045   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:56.715146   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:56.715297   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:09:56.715609   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.56 22 <nil> <nil>}
	I0707 16:09:56.715621   32269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-136000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-136000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-136000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0707 16:09:56.796953   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0707 16:09:56.796979   32269 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16845-29196/.minikube CaCertPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16845-29196/.minikube}
	I0707 16:09:56.796991   32269 buildroot.go:174] setting up certificates
	I0707 16:09:56.796999   32269 provision.go:83] configureAuth start
	I0707 16:09:56.797006   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetMachineName
	I0707 16:09:56.797147   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetIP
	I0707 16:09:56.797238   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:56.797325   32269 provision.go:138] copyHostCerts
	I0707 16:09:56.797370   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem
	I0707 16:09:56.797424   32269 exec_runner.go:144] found /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem, removing ...
	I0707 16:09:56.797429   32269 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem
	I0707 16:09:56.797544   32269 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem (1082 bytes)
	I0707 16:09:56.797719   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem
	I0707 16:09:56.797761   32269 exec_runner.go:144] found /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem, removing ...
	I0707 16:09:56.797766   32269 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem
	I0707 16:09:56.797831   32269 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem (1123 bytes)
	I0707 16:09:56.797963   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem
	I0707 16:09:56.798005   32269 exec_runner.go:144] found /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem, removing ...
	I0707 16:09:56.798010   32269 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem
	I0707 16:09:56.798080   32269 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem (1675 bytes)
	I0707 16:09:56.798210   32269 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca-key.pem org=jenkins.multinode-136000-m02 san=[192.168.64.56 192.168.64.56 localhost 127.0.0.1 minikube multinode-136000-m02]
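
The server certificate above is minted against the shared minikube CA with the machine's addresses and names folded in as subject alternative names (the san=[...] list in the log). A compact sketch of that step with crypto/x509; it self-signs for brevity instead of signing with ca.pem/ca-key.pem, and the SAN list mirrors the log line:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-136000-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the provision.go line above.
			IPAddresses: []net.IP{net.ParseIP("192.168.64.56"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "multinode-136000-m02"},
		}

		// Self-signed here for brevity; the real code signs with the CA
		// key so every machine cert chains to ca.pem.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
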
	I0707 16:09:56.873950   32269 provision.go:172] copyRemoteCerts
	I0707 16:09:56.874008   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0707 16:09:56.874025   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:56.874169   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:56.874261   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:56.874358   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:56.874448   32269 sshutil.go:53] new ssh client: &{IP:192.168.64.56 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/id_rsa Username:docker}
	I0707 16:09:56.917647   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0707 16:09:56.917716   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0707 16:09:56.933654   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0707 16:09:56.933710   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0707 16:09:56.949646   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0707 16:09:56.949702   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0707 16:09:56.965625   32269 provision.go:86] duration metric: configureAuth took 168.616012ms
	I0707 16:09:56.965635   32269 buildroot.go:189] setting minikube options for container-runtime
	I0707 16:09:56.965811   32269 config.go:182] Loaded profile config "multinode-136000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0707 16:09:56.965826   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	I0707 16:09:56.965954   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:56.966037   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:56.966131   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:56.966217   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:56.966294   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:56.966416   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:09:56.966705   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.56 22 <nil> <nil>}
	I0707 16:09:56.966713   32269 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0707 16:09:57.042075   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0707 16:09:57.042092   32269 buildroot.go:70] root file system type: tmpfs
	I0707 16:09:57.042186   32269 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0707 16:09:57.042200   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:57.042343   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:57.042436   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.042512   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.042592   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:57.042728   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:09:57.043044   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.56 22 <nil> <nil>}
	I0707 16:09:57.043091   32269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.64.55"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0707 16:09:57.127435   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.64.55
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0707 16:09:57.127452   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:57.127588   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:57.127700   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.127789   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.127877   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:57.128013   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:09:57.128319   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.56 22 <nil> <nil>}
	I0707 16:09:57.128333   32269 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0707 16:09:57.690253   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
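	
	The SSH command above is an idempotent install: diff exits non-zero when the target differs or, as here, does not exist yet, and only then is the new unit moved into place and docker reloaded. The same pattern in isolation (a sketch; UNIT is an illustrative variable name):
	
	UNIT=/lib/systemd/system/docker.service
	if ! sudo diff -u "$UNIT" "$UNIT.new"; then
	  sudo mv "$UNIT.new" "$UNIT"
	  sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
	fi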
	
	I0707 16:09:57.690267   32269 machine.go:91] provisioned docker machine in 36.073333253s
	I0707 16:09:57.690274   32269 start.go:300] post-start starting for "multinode-136000-m02" (driver="hyperkit")
	I0707 16:09:57.690281   32269 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0707 16:09:57.690314   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	I0707 16:09:57.690500   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0707 16:09:57.690520   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:57.690613   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:57.690697   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.690781   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:57.690857   32269 sshutil.go:53] new ssh client: &{IP:192.168.64.56 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/id_rsa Username:docker}
	I0707 16:09:57.734283   32269 ssh_runner.go:195] Run: cat /etc/os-release
	I0707 16:09:57.736846   32269 command_runner.go:130] > NAME=Buildroot
	I0707 16:09:57.736859   32269 command_runner.go:130] > VERSION=2021.02.12-1-g6f2898e-dirty
	I0707 16:09:57.736863   32269 command_runner.go:130] > ID=buildroot
	I0707 16:09:57.736868   32269 command_runner.go:130] > VERSION_ID=2021.02.12
	I0707 16:09:57.736885   32269 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0707 16:09:57.736979   32269 info.go:137] Remote host: Buildroot 2021.02.12
	I0707 16:09:57.736989   32269 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16845-29196/.minikube/addons for local assets ...
	I0707 16:09:57.737071   32269 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16845-29196/.minikube/files for local assets ...
	I0707 16:09:57.737245   32269 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem -> 296432.pem in /etc/ssl/certs
	I0707 16:09:57.737250   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem -> /etc/ssl/certs/296432.pem
	I0707 16:09:57.737432   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0707 16:09:57.743102   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem --> /etc/ssl/certs/296432.pem (1708 bytes)
	I0707 16:09:57.759197   32269 start.go:303] post-start completed in 68.913748ms
	I0707 16:09:57.759208   32269 fix.go:56] fixHost completed within 36.253597016s
	I0707 16:09:57.759222   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:57.759352   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:57.759474   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.759564   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.759651   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:57.759766   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:09:57.760064   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.56 22 <nil> <nil>}
	I0707 16:09:57.760073   32269 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0707 16:09:57.834660   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688771397.920614680
	
	I0707 16:09:57.834672   32269 fix.go:206] guest clock: 1688771397.920614680
	I0707 16:09:57.834677   32269 fix.go:219] Guest: 2023-07-07 16:09:57.92061468 -0700 PDT Remote: 2023-07-07 16:09:57.759213 -0700 PDT m=+89.602198557 (delta=161.40168ms)
	I0707 16:09:57.834687   32269 fix.go:190] guest clock delta is within tolerance: 161.40168ms
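	
	The delta above comes from running date +%s.%N in the guest and comparing it against the host clock at the moment the SSH command returned; small skews are tolerated rather than corrected. Measuring the same skew by hand (a sketch; "guest" is an illustrative SSH alias, and whole seconds are used on the macOS side since BSD date lacks %N):
	
	guest_t=$(ssh guest 'date +%s.%N')
	host_t=$(date +%s)   # whole seconds are enough to spot a bad skew
	echo "skew: $(echo "$guest_t - $host_t" | bc) s"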
	I0707 16:09:57.834691   32269 start.go:83] releasing machines lock for "multinode-136000-m02", held for 36.32909835s
	I0707 16:09:57.834715   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	I0707 16:09:57.834848   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetIP
	I0707 16:09:57.858213   32269 out.go:177] * Found network options:
	I0707 16:09:57.880446   32269 out.go:177]   - NO_PROXY=192.168.64.55
	W0707 16:09:57.902337   32269 proxy.go:119] fail to check proxy env: Error ip not in block
	I0707 16:09:57.902382   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	I0707 16:09:57.903199   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	I0707 16:09:57.903460   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	I0707 16:09:57.903625   32269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0707 16:09:57.903663   32269 proxy.go:119] fail to check proxy env: Error ip not in block
	I0707 16:09:57.903688   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:57.903817   32269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0707 16:09:57.903847   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:57.903920   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:57.904056   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:57.904124   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.904245   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.904261   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:57.904378   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:57.904402   32269 sshutil.go:53] new ssh client: &{IP:192.168.64.56 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/id_rsa Username:docker}
	I0707 16:09:57.904500   32269 sshutil.go:53] new ssh client: &{IP:192.168.64.56 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/id_rsa Username:docker}
	I0707 16:09:57.945647   32269 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0707 16:09:57.945792   32269 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0707 16:09:57.945862   32269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0707 16:09:57.989041   32269 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0707 16:09:57.989094   32269 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0707 16:09:57.989120   32269 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
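	
	Disabling works by renaming, so the step is reversible; undoing it if the bridge/podman CNI configs are ever needed again (a sketch, run inside the guest):
	
	for f in /etc/cni/net.d/*.mk_disabled; do
	  sudo mv "$f" "${f%.mk_disabled}"   # strip the .mk_disabled suffix again
	done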
	I0707 16:09:57.989133   32269 start.go:466] detecting cgroup driver to use...
	I0707 16:09:57.989247   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0707 16:09:58.002396   32269 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0707 16:09:58.002465   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0707 16:09:58.009560   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0707 16:09:58.016560   32269 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0707 16:09:58.016607   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0707 16:09:58.023558   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0707 16:09:58.030474   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0707 16:09:58.037312   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0707 16:09:58.044416   32269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0707 16:09:58.051665   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0707 16:09:58.058551   32269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0707 16:09:58.064706   32269 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0707 16:09:58.064873   32269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0707 16:09:58.071105   32269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0707 16:09:58.165592   32269 ssh_runner.go:195] Run: sudo systemctl restart containerd
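	
	The sed edits above pin containerd to the cgroupfs driver and the runc v2 shim, and the sysctl/ip_forward writes satisfy Kubernetes' bridged-networking prerequisites. Spot-checking the result after the restart (a sketch, run inside the guest):
	
	sudo containerd config dump | grep SystemdCgroup               # expect: SystemdCgroup = false
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward  # expect both = 1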
	I0707 16:09:58.177684   32269 start.go:466] detecting cgroup driver to use...
	I0707 16:09:58.177751   32269 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0707 16:09:58.186600   32269 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0707 16:09:58.187212   32269 command_runner.go:130] > [Unit]
	I0707 16:09:58.187240   32269 command_runner.go:130] > Description=Docker Application Container Engine
	I0707 16:09:58.187245   32269 command_runner.go:130] > Documentation=https://docs.docker.com
	I0707 16:09:58.187252   32269 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0707 16:09:58.187259   32269 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0707 16:09:58.187267   32269 command_runner.go:130] > StartLimitBurst=3
	I0707 16:09:58.187271   32269 command_runner.go:130] > StartLimitIntervalSec=60
	I0707 16:09:58.187275   32269 command_runner.go:130] > [Service]
	I0707 16:09:58.187321   32269 command_runner.go:130] > Type=notify
	I0707 16:09:58.187326   32269 command_runner.go:130] > Restart=on-failure
	I0707 16:09:58.187330   32269 command_runner.go:130] > Environment=NO_PROXY=192.168.64.55
	I0707 16:09:58.187336   32269 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0707 16:09:58.187348   32269 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0707 16:09:58.187368   32269 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0707 16:09:58.187394   32269 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0707 16:09:58.187400   32269 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0707 16:09:58.187407   32269 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0707 16:09:58.187414   32269 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0707 16:09:58.187425   32269 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0707 16:09:58.187431   32269 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0707 16:09:58.187434   32269 command_runner.go:130] > ExecStart=
	I0707 16:09:58.187446   32269 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0707 16:09:58.187452   32269 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0707 16:09:58.187459   32269 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0707 16:09:58.187465   32269 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0707 16:09:58.187490   32269 command_runner.go:130] > LimitNOFILE=infinity
	I0707 16:09:58.187516   32269 command_runner.go:130] > LimitNPROC=infinity
	I0707 16:09:58.187521   32269 command_runner.go:130] > LimitCORE=infinity
	I0707 16:09:58.187529   32269 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0707 16:09:58.187536   32269 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0707 16:09:58.187541   32269 command_runner.go:130] > TasksMax=infinity
	I0707 16:09:58.187561   32269 command_runner.go:130] > TimeoutStartSec=0
	I0707 16:09:58.187589   32269 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0707 16:09:58.187595   32269 command_runner.go:130] > Delegate=yes
	I0707 16:09:58.187601   32269 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0707 16:09:58.187608   32269 command_runner.go:130] > KillMode=process
	I0707 16:09:58.187613   32269 command_runner.go:130] > [Install]
	I0707 16:09:58.187616   32269 command_runner.go:130] > WantedBy=multi-user.target
	I0707 16:09:58.187820   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0707 16:09:58.198397   32269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0707 16:09:58.229497   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0707 16:09:58.238537   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0707 16:09:58.247320   32269 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0707 16:09:58.268907   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0707 16:09:58.278123   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0707 16:09:58.290354   32269 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0707 16:09:58.290771   32269 ssh_runner.go:195] Run: which cri-dockerd
	I0707 16:09:58.292879   32269 command_runner.go:130] > /usr/bin/cri-dockerd
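	
	With /etc/crictl.yaml now pointing at cri-dockerd's socket, crictl calls go through the Docker shim. Once the runtime is up, the endpoint can be exercised directly (a sketch, run inside the guest):
	
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info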
	I0707 16:09:58.293077   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0707 16:09:58.299024   32269 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0707 16:09:58.309756   32269 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0707 16:09:58.389655   32269 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0707 16:09:58.477748   32269 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0707 16:09:58.477764   32269 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
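	
	The 144-byte daemon.json itself is not shown in the log; a plausible equivalent that selects the cgroupfs driver would be (a sketch, not the exact file minikube writes):
	
	sudo tee /etc/docker/daemon.json <<-'EOF'
	{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }
	EOF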
	I0707 16:09:58.489100   32269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0707 16:09:58.577474   32269 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0707 16:10:59.622276   32269 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0707 16:10:59.622290   32269 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I0707 16:10:59.622326   32269 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.043496052s)
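	
	The restart blocked for the full minute and then failed, so the next step is the triage the error text itself recommends (a sketch, run inside the guest):
	
	systemctl status docker.service --no-pager
	journalctl -u docker.service --no-pager -n 50   # last 50 lines of the unit's journal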
	I0707 16:10:59.644709   32269 out.go:177] 
	W0707 16:10:59.665571   32269 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W0707 16:10:59.665610   32269 out.go:239] * 
	W0707 16:10:59.666822   32269 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0707 16:10:59.710532   32269 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-136000 --wait=true -v=8 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-136000 -n multinode-136000
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-136000 logs -n 25: (2.741052761s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                             |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-136000 cp multinode-136000-m02:/home/docker/cp-test.txt                                                           | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	|         | multinode-136000:/home/docker/cp-test_multinode-136000-m02_multinode-136000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-136000 ssh -n                                                                                                     | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	|         | multinode-136000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-136000 ssh -n multinode-136000 sudo cat                                                                           | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	|         | /home/docker/cp-test_multinode-136000-m02_multinode-136000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-136000 cp multinode-136000-m02:/home/docker/cp-test.txt                                                           | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	|         | multinode-136000-m03:/home/docker/cp-test_multinode-136000-m02_multinode-136000-m03.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-136000 ssh -n                                                                                                     | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	|         | multinode-136000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-136000 ssh -n multinode-136000-m03 sudo cat                                                                       | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	|         | /home/docker/cp-test_multinode-136000-m02_multinode-136000-m03.txt                                                          |                  |         |         |                     |                     |
	| cp      | multinode-136000 cp testdata/cp-test.txt                                                                                    | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	|         | multinode-136000-m03:/home/docker/cp-test.txt                                                                               |                  |         |         |                     |                     |
	| ssh     | multinode-136000 ssh -n                                                                                                     | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	|         | multinode-136000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-136000 cp multinode-136000-m03:/home/docker/cp-test.txt                                                           | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1161446772/001/cp-test_multinode-136000-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-136000 ssh -n                                                                                                     | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	|         | multinode-136000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-136000 cp multinode-136000-m03:/home/docker/cp-test.txt                                                           | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	|         | multinode-136000:/home/docker/cp-test_multinode-136000-m03_multinode-136000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-136000 ssh -n                                                                                                     | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	|         | multinode-136000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-136000 ssh -n multinode-136000 sudo cat                                                                           | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	|         | /home/docker/cp-test_multinode-136000-m03_multinode-136000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-136000 cp multinode-136000-m03:/home/docker/cp-test.txt                                                           | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	|         | multinode-136000-m02:/home/docker/cp-test_multinode-136000-m03_multinode-136000-m02.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-136000 ssh -n                                                                                                     | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	|         | multinode-136000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-136000 ssh -n multinode-136000-m02 sudo cat                                                                       | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	|         | /home/docker/cp-test_multinode-136000-m03_multinode-136000-m02.txt                                                          |                  |         |         |                     |                     |
	| node    | multinode-136000 node stop m03                                                                                              | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	| node    | multinode-136000 node start                                                                                                 | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:04 PDT |
	|         | m03 --alsologtostderr                                                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-136000                                                                                                    | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT |                     |
	| stop    | -p multinode-136000                                                                                                         | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:04 PDT | 07 Jul 23 16:05 PDT |
	| start   | -p multinode-136000                                                                                                         | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:05 PDT | 07 Jul 23 16:08 PDT |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	| node    | list -p multinode-136000                                                                                                    | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:08 PDT |                     |
	| node    | multinode-136000 node delete                                                                                                | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:08 PDT | 07 Jul 23 16:08 PDT |
	|         | m03                                                                                                                         |                  |         |         |                     |                     |
	| stop    | multinode-136000 stop                                                                                                       | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:08 PDT | 07 Jul 23 16:08 PDT |
	| start   | -p multinode-136000                                                                                                         | multinode-136000 | jenkins | v1.30.1 | 07 Jul 23 16:08 PDT |                     |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                           |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/07 16:08:28
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
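	
	Given that format, the warning and error lines can be pulled out of a saved log with a one-line filter (a sketch; logs.txt is an illustrative file name):
	
	grep -E '^[WE][0-9]{4} ' logs.txt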
	I0707 16:08:28.188680   32269 out.go:296] Setting OutFile to fd 1 ...
	I0707 16:08:28.188843   32269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 16:08:28.188849   32269 out.go:309] Setting ErrFile to fd 2...
	I0707 16:08:28.188853   32269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 16:08:28.188964   32269 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16845-29196/.minikube/bin
	I0707 16:08:28.190438   32269 out.go:303] Setting JSON to false
	I0707 16:08:28.209923   32269 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":11273,"bootTime":1688760035,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0707 16:08:28.210029   32269 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0707 16:08:28.232171   32269 out.go:177] * [multinode-136000] minikube v1.30.1 on Darwin 13.4.1
	I0707 16:08:28.274831   32269 out.go:177]   - MINIKUBE_LOCATION=16845
	I0707 16:08:28.274890   32269 notify.go:220] Checking for updates...
	I0707 16:08:28.318692   32269 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16845-29196/kubeconfig
	I0707 16:08:28.339737   32269 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0707 16:08:28.381733   32269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0707 16:08:28.402814   32269 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16845-29196/.minikube
	I0707 16:08:28.444868   32269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0707 16:08:28.466642   32269 config.go:182] Loaded profile config "multinode-136000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0707 16:08:28.467309   32269 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:08:28.467392   32269 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 16:08:28.475111   32269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49214
	I0707 16:08:28.475452   32269 main.go:141] libmachine: () Calling .GetVersion
	I0707 16:08:28.475891   32269 main.go:141] libmachine: Using API Version  1
	I0707 16:08:28.475912   32269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 16:08:28.476164   32269 main.go:141] libmachine: () Calling .GetMachineName
	I0707 16:08:28.476281   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:28.476455   32269 driver.go:373] Setting default libvirt URI to qemu:///system
	I0707 16:08:28.476700   32269 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:08:28.476728   32269 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 16:08:28.483479   32269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49216
	I0707 16:08:28.483793   32269 main.go:141] libmachine: () Calling .GetVersion
	I0707 16:08:28.484134   32269 main.go:141] libmachine: Using API Version  1
	I0707 16:08:28.484149   32269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 16:08:28.484378   32269 main.go:141] libmachine: () Calling .GetMachineName
	I0707 16:08:28.484479   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:28.511776   32269 out.go:177] * Using the hyperkit driver based on existing profile
	I0707 16:08:28.553740   32269 start.go:297] selected driver: hyperkit
	I0707 16:08:28.553760   32269 start.go:944] validating driver "hyperkit" against &{Name:multinode-136000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.55 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.56 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0707 16:08:28.553937   32269 start.go:955] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0707 16:08:28.554110   32269 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0707 16:08:28.554277   32269 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/16845-29196/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0707 16:08:28.562351   32269 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.30.1
	I0707 16:08:28.565856   32269 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:08:28.565878   32269 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0707 16:08:28.568178   32269 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0707 16:08:28.568229   32269 cni.go:84] Creating CNI manager for ""
	I0707 16:08:28.568237   32269 cni.go:137] 2 nodes found, recommending kindnet
	I0707 16:08:28.568261   32269 start_flags.go:319] config:
	{Name:multinode-136000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.55 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.56 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0707 16:08:28.568507   32269 iso.go:125] acquiring lock: {Name:mkc26c030f62bdf6e3ab619c68665518d3e66b24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0707 16:08:28.610755   32269 out.go:177] * Starting control plane node multinode-136000 in cluster multinode-136000
	I0707 16:08:28.631646   32269 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0707 16:08:28.631696   32269 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16845-29196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0707 16:08:28.631714   32269 cache.go:57] Caching tarball of preloaded images
	I0707 16:08:28.631805   32269 preload.go:174] Found /Users/jenkins/minikube-integration/16845-29196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0707 16:08:28.631814   32269 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0707 16:08:28.631920   32269 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/config.json ...
	I0707 16:08:28.632354   32269 start.go:365] acquiring machines lock for multinode-136000: {Name:mk81f6152b3f423bf222fad0025fe3c8ddb3ea12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0707 16:08:28.632405   32269 start.go:369] acquired machines lock for "multinode-136000" in 39.211µs
	I0707 16:08:28.632428   32269 start.go:96] Skipping create...Using existing machine configuration
	I0707 16:08:28.632436   32269 fix.go:54] fixHost starting: 
	I0707 16:08:28.632660   32269 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:08:28.632683   32269 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 16:08:28.639995   32269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49218
	I0707 16:08:28.640345   32269 main.go:141] libmachine: () Calling .GetVersion
	I0707 16:08:28.640723   32269 main.go:141] libmachine: Using API Version  1
	I0707 16:08:28.640736   32269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 16:08:28.640968   32269 main.go:141] libmachine: () Calling .GetMachineName
	I0707 16:08:28.641082   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:28.641168   32269 main.go:141] libmachine: (multinode-136000) Calling .GetState
	I0707 16:08:28.641249   32269 main.go:141] libmachine: (multinode-136000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:08:28.641304   32269 main.go:141] libmachine: (multinode-136000) DBG | hyperkit pid from json: 32119
	I0707 16:08:28.642238   32269 main.go:141] libmachine: (multinode-136000) DBG | hyperkit pid 32119 missing from process table
	I0707 16:08:28.642280   32269 fix.go:102] recreateIfNeeded on multinode-136000: state=Stopped err=<nil>
	I0707 16:08:28.642301   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	W0707 16:08:28.642399   32269 fix.go:128] unexpected machine state, will restart: <nil>
	I0707 16:08:28.684727   32269 out.go:177] * Restarting existing hyperkit VM for "multinode-136000" ...
	I0707 16:08:28.705745   32269 main.go:141] libmachine: (multinode-136000) Calling .Start
	I0707 16:08:28.706017   32269 main.go:141] libmachine: (multinode-136000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:08:28.706087   32269 main.go:141] libmachine: (multinode-136000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/hyperkit.pid
	I0707 16:08:28.707885   32269 main.go:141] libmachine: (multinode-136000) DBG | hyperkit pid 32119 missing from process table
	I0707 16:08:28.707904   32269 main.go:141] libmachine: (multinode-136000) DBG | pid 32119 is in state "Stopped"
	I0707 16:08:28.707933   32269 main.go:141] libmachine: (multinode-136000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/hyperkit.pid...
	I0707 16:08:28.708191   32269 main.go:141] libmachine: (multinode-136000) DBG | Using UUID 4429c2bc-1d1a-11ee-8196-149d997f80ea
	I0707 16:08:28.828161   32269 main.go:141] libmachine: (multinode-136000) DBG | Generated MAC 66:77:10:3:27:1c
	I0707 16:08:28.828183   32269 main.go:141] libmachine: (multinode-136000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-136000
	I0707 16:08:28.828311   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4429c2bc-1d1a-11ee-8196-149d997f80ea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000436390)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/bzimage", Initrd:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0707 16:08:28.828352   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4429c2bc-1d1a-11ee-8196-149d997f80ea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000436390)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/bzimage", Initrd:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0707 16:08:28.828416   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4429c2bc-1d1a-11ee-8196-149d997f80ea", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/multinode-136000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/tty,log=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/bzimage,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-136000"}
	I0707 16:08:28.828449   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4429c2bc-1d1a-11ee-8196-149d997f80ea -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/multinode-136000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/tty,log=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/console-ring -f kexec,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/bzimage,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-136000"
	I0707 16:08:28.828458   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0707 16:08:28.829937   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 DEBUG: hyperkit: Pid is 32285
	I0707 16:08:28.830493   32269 main.go:141] libmachine: (multinode-136000) DBG | Attempt 0
	I0707 16:08:28.830539   32269 main.go:141] libmachine: (multinode-136000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:08:28.830596   32269 main.go:141] libmachine: (multinode-136000) DBG | hyperkit pid from json: 32285
	I0707 16:08:28.832320   32269 main.go:141] libmachine: (multinode-136000) DBG | Searching for 66:77:10:3:27:1c in /var/db/dhcpd_leases ...
	I0707 16:08:28.832438   32269 main.go:141] libmachine: (multinode-136000) DBG | Found 56 entries in /var/db/dhcpd_leases!
	I0707 16:08:28.832456   32269 main.go:141] libmachine: (multinode-136000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.57 HWAddress:e2:5d:8d:f1:83:3b ID:1,e2:5d:8d:f1:83:3b Lease:0x64a89ada}
	I0707 16:08:28.832470   32269 main.go:141] libmachine: (multinode-136000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.56 HWAddress:b2:4b:8:0:c2:14 ID:1,b2:4b:8:0:c2:14 Lease:0x64a9ebeb}
	I0707 16:08:28.832483   32269 main.go:141] libmachine: (multinode-136000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.55 HWAddress:66:77:10:3:27:1c ID:1,66:77:10:3:27:1c Lease:0x64a9ebb4}
	I0707 16:08:28.832495   32269 main.go:141] libmachine: (multinode-136000) DBG | Found match: 66:77:10:3:27:1c
	I0707 16:08:28.832505   32269 main.go:141] libmachine: (multinode-136000) DBG | IP: 192.168.64.55
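	
	The driver recovers the VM's IP purely by matching the generated MAC against macOS's DHCP lease database. The same lookup by hand on the host (a sketch; the exact layout of the lease file may vary):
	
	sudo grep -C 3 '66:77:10:3:27:1c' /var/db/dhcpd_leases   # nearby lines carry the lease's ip_address=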
	I0707 16:08:28.832558   32269 main.go:141] libmachine: (multinode-136000) Calling .GetConfigRaw
	I0707 16:08:28.833139   32269 main.go:141] libmachine: (multinode-136000) Calling .GetIP
	I0707 16:08:28.833339   32269 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/config.json ...
	I0707 16:08:28.833663   32269 machine.go:88] provisioning docker machine ...
	I0707 16:08:28.833681   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:28.833813   32269 main.go:141] libmachine: (multinode-136000) Calling .GetMachineName
	I0707 16:08:28.833937   32269 buildroot.go:166] provisioning hostname "multinode-136000"
	I0707 16:08:28.833953   32269 main.go:141] libmachine: (multinode-136000) Calling .GetMachineName
	I0707 16:08:28.834085   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:28.834208   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:28.834346   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:28.834466   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:28.834580   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:28.834730   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:08:28.835111   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.55 22 <nil> <nil>}
	I0707 16:08:28.835122   32269 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-136000 && echo "multinode-136000" | sudo tee /etc/hostname
	I0707 16:08:28.837192   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0707 16:08:28.895034   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0707 16:08:28.895832   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0707 16:08:28.895876   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0707 16:08:28.895912   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0707 16:08:28.895933   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0707 16:08:29.260221   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0707 16:08:29.260237   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0707 16:08:29.364352   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0707 16:08:29.364373   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0707 16:08:29.364398   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0707 16:08:29.364414   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0707 16:08:29.365281   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0707 16:08:29.365290   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0707 16:08:34.206083   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:34 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0707 16:08:34.206142   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:34 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0707 16:08:34.206151   32269 main.go:141] libmachine: (multinode-136000) DBG | 2023/07/07 16:08:34 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0707 16:08:39.922084   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-136000
	
	I0707 16:08:39.922101   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:39.922230   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:39.922327   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:39.922420   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:39.922528   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:39.922696   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:08:39.923044   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.55 22 <nil> <nil>}
	I0707 16:08:39.923058   32269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-136000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-136000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-136000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0707 16:08:39.993711   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
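The SSH command above makes the hostname resolvable inside the guest: if no /etc/hosts line already ends with multinode-136000, it either rewrites an existing 127.0.1.1 line or appends a new one. A rough Go equivalent of that shell logic (it needs root to write /etc/hosts, and is illustrative only):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell above: map 127.0.1.1 to the machine
// name, rewriting an existing 127.0.1.1 line or appending one.
func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
		return nil // already mapped
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + name
	var out string
	if loopback.Match(data) {
		out = loopback.ReplaceAllString(string(data), entry)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "multinode-136000"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}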
	I0707 16:08:39.993729   32269 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16845-29196/.minikube CaCertPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16845-29196/.minikube}
	I0707 16:08:39.993749   32269 buildroot.go:174] setting up certificates
	I0707 16:08:39.993759   32269 provision.go:83] configureAuth start
	I0707 16:08:39.993766   32269 main.go:141] libmachine: (multinode-136000) Calling .GetMachineName
	I0707 16:08:39.993902   32269 main.go:141] libmachine: (multinode-136000) Calling .GetIP
	I0707 16:08:39.994001   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:39.994098   32269 provision.go:138] copyHostCerts
	I0707 16:08:39.994156   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem
	I0707 16:08:39.994215   32269 exec_runner.go:144] found /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem, removing ...
	I0707 16:08:39.994222   32269 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem
	I0707 16:08:39.994327   32269 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem (1082 bytes)
	I0707 16:08:39.994516   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem
	I0707 16:08:39.994559   32269 exec_runner.go:144] found /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem, removing ...
	I0707 16:08:39.994568   32269 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem
	I0707 16:08:39.994636   32269 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem (1123 bytes)
	I0707 16:08:39.994776   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem
	I0707 16:08:39.994817   32269 exec_runner.go:144] found /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem, removing ...
	I0707 16:08:39.994821   32269 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem
	I0707 16:08:39.994879   32269 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem (1675 bytes)
	I0707 16:08:39.995013   32269 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca-key.pem org=jenkins.multinode-136000 san=[192.168.64.55 192.168.64.55 localhost 127.0.0.1 minikube multinode-136000]
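Note the SAN list on the generated server certificate: it must cover every name and address clients will use to reach the Docker daemon (the VM IP, localhost, 127.0.0.1, and the machine names), or TLS verification fails. A self-contained Go sketch of issuing such a certificate with crypto/x509; a real provisioner would load ca.pem and ca-key.pem from disk rather than generating a CA in memory, and errors are elided to keep the sketch short:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs seen in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-136000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.64.55"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "multinode-136000"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}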
	I0707 16:08:40.207508   32269 provision.go:172] copyRemoteCerts
	I0707 16:08:40.207604   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0707 16:08:40.207620   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:40.207834   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:40.208011   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:40.208259   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:40.208434   32269 sshutil.go:53] new ssh client: &{IP:192.168.64.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/id_rsa Username:docker}
	I0707 16:08:40.247859   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0707 16:08:40.247966   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0707 16:08:40.264154   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0707 16:08:40.264216   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0707 16:08:40.279893   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0707 16:08:40.279989   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0707 16:08:40.295907   32269 provision.go:86] duration metric: configureAuth took 302.129917ms
	I0707 16:08:40.295919   32269 buildroot.go:189] setting minikube options for container-runtime
	I0707 16:08:40.296111   32269 config.go:182] Loaded profile config "multinode-136000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0707 16:08:40.296152   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:40.296285   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:40.296381   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:40.296512   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:40.296587   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:40.296673   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:40.296780   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:08:40.297070   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.55 22 <nil> <nil>}
	I0707 16:08:40.297078   32269 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0707 16:08:40.365704   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0707 16:08:40.365722   32269 buildroot.go:70] root file system type: tmpfs
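The df probe returning tmpfs tells the provisioner that the buildroot guest's root filesystem is ephemeral, which is why the docker unit file is regenerated on every start instead of being assumed to persist. The same check can be made without shelling out; a Linux-only sketch using statfs (requires golang.org/x/sys; the magic constant is TMPFS_MAGIC from <linux/magic.h>):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

const tmpfsMagic = 0x01021994 // TMPFS_MAGIC from <linux/magic.h>

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/", &st); err != nil {
		panic(err)
	}
	fmt.Println("root is tmpfs:", st.Type == tmpfsMagic)
}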
	I0707 16:08:40.365782   32269 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0707 16:08:40.365795   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:40.365936   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:40.366043   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:40.366172   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:40.366278   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:40.366435   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:08:40.366733   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.55 22 <nil> <nil>}
	I0707 16:08:40.366778   32269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0707 16:08:40.440549   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0707 16:08:40.440580   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:40.440718   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:40.440818   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:40.440915   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:40.441006   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:40.441156   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:08:40.441471   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.55 22 <nil> <nil>}
	I0707 16:08:40.441490   32269 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0707 16:08:41.161548   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
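The diff ... || { mv ...; systemctl ...; } one-liner above is an idempotence guard: the new unit is installed, and docker reloaded, enabled, and restarted, only when the rendered file actually differs. Here the target did not exist yet, hence the diff error followed by the fresh enable symlink. A Go sketch of the same guard, with the paths taken from the log:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged replaces dst with src and restarts the unit only
// when the contents differ, mirroring the shell guard above.
func installIfChanged(src, dst, unit string) error {
	newData, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	oldData, err := os.ReadFile(dst) // a missing dst counts as "changed"
	if err == nil && bytes.Equal(oldData, newData) {
		return nil // nothing to do
	}
	if err := os.Rename(src, dst); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", unit}, {"restart", unit},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := installIfChanged("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service", "docker"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}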
	I0707 16:08:41.161562   32269 machine.go:91] provisioned docker machine in 12.327619344s
	I0707 16:08:41.161572   32269 start.go:300] post-start starting for "multinode-136000" (driver="hyperkit")
	I0707 16:08:41.161584   32269 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0707 16:08:41.161598   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:41.161796   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0707 16:08:41.161812   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:41.161915   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:41.162008   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:41.162091   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:41.162171   32269 sshutil.go:53] new ssh client: &{IP:192.168.64.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/id_rsa Username:docker}
	I0707 16:08:41.201693   32269 ssh_runner.go:195] Run: cat /etc/os-release
	I0707 16:08:41.204061   32269 command_runner.go:130] > NAME=Buildroot
	I0707 16:08:41.204070   32269 command_runner.go:130] > VERSION=2021.02.12-1-g6f2898e-dirty
	I0707 16:08:41.204076   32269 command_runner.go:130] > ID=buildroot
	I0707 16:08:41.204081   32269 command_runner.go:130] > VERSION_ID=2021.02.12
	I0707 16:08:41.204088   32269 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0707 16:08:41.204287   32269 info.go:137] Remote host: Buildroot 2021.02.12
	I0707 16:08:41.204298   32269 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16845-29196/.minikube/addons for local assets ...
	I0707 16:08:41.204379   32269 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16845-29196/.minikube/files for local assets ...
	I0707 16:08:41.204549   32269 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem -> 296432.pem in /etc/ssl/certs
	I0707 16:08:41.204556   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem -> /etc/ssl/certs/296432.pem
	I0707 16:08:41.204730   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0707 16:08:41.210831   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem --> /etc/ssl/certs/296432.pem (1708 bytes)
	I0707 16:08:41.226283   32269 start.go:303] post-start completed in 64.701658ms
	I0707 16:08:41.226297   32269 fix.go:56] fixHost completed within 12.593586593s
	I0707 16:08:41.226314   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:41.226440   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:41.226522   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:41.226615   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:41.226699   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:41.226825   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:08:41.227129   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.55 22 <nil> <nil>}
	I0707 16:08:41.227137   32269 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0707 16:08:41.292064   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688771321.128629285
	
	I0707 16:08:41.292076   32269 fix.go:206] guest clock: 1688771321.128629285
	I0707 16:08:41.292081   32269 fix.go:219] Guest: 2023-07-07 16:08:41.128629285 -0700 PDT Remote: 2023-07-07 16:08:41.2263 -0700 PDT m=+13.070964927 (delta=-97.670715ms)
	I0707 16:08:41.292099   32269 fix.go:190] guest clock delta is within tolerance: -97.670715ms
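The fix.go lines compare the guest clock, read over SSH with date +%s.%N, against the host clock and accept the machine only if the skew stays inside a tolerance; here the delta is -97.67ms. A pure-Go sketch of that comparison; the one-second tolerance below is an assumption for illustration, not minikube's actual threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output such as
// "1688771321.128629285" into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	sec, nsec, _ := strings.Cut(strings.TrimSpace(s), ".")
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	ns, err := strconv.ParseInt(nsec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(secs, ns), nil
}

func main() {
	guest, err := parseGuestClock("1688771321.128629285")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now()) // matches the log's sign: negative when guest lags
	if delta < 0 {
		delta = -delta
	}
	fmt.Println("within tolerance:", delta < time.Second) // assumed 1s tolerance
}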
	I0707 16:08:41.292103   32269 start.go:83] releasing machines lock for "multinode-136000", held for 12.659414156s
	I0707 16:08:41.292119   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:41.292240   32269 main.go:141] libmachine: (multinode-136000) Calling .GetIP
	I0707 16:08:41.292332   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:41.292655   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:41.292786   32269 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:08:41.292873   32269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0707 16:08:41.292907   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:41.292927   32269 ssh_runner.go:195] Run: cat /version.json
	I0707 16:08:41.292938   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:08:41.293044   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:41.293054   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:08:41.293156   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:41.293169   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:08:41.293245   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:41.293262   32269 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:08:41.293327   32269 sshutil.go:53] new ssh client: &{IP:192.168.64.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/id_rsa Username:docker}
	I0707 16:08:41.293356   32269 sshutil.go:53] new ssh client: &{IP:192.168.64.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/id_rsa Username:docker}
	I0707 16:08:41.370650   32269 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0707 16:08:41.371560   32269 command_runner.go:130] > {"iso_version": "v1.30.1-1688144767-16765", "kicbase_version": "v0.0.39-1687538068-16731", "minikube_version": "v1.30.1", "commit": "ea1fcc3c7b384862404a5ec9a04bec1496959f9b"}
	I0707 16:08:41.371683   32269 ssh_runner.go:195] Run: systemctl --version
	I0707 16:08:41.375584   32269 command_runner.go:130] > systemd 247 (247)
	I0707 16:08:41.375601   32269 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0707 16:08:41.375907   32269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0707 16:08:41.379320   32269 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0707 16:08:41.379338   32269 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0707 16:08:41.379378   32269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0707 16:08:41.389522   32269 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0707 16:08:41.389544   32269 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
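Because this cluster runs the docker runtime with kindnet as its CNI, any leftover bridge or podman CNI configs would conflict with the config minikube installs later, so they are renamed with a .mk_disabled suffix rather than deleted, keeping the change reversible. A Go sketch of that rename pass, mirroring the find ... -exec mv above:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
		if err != nil {
			panic(err) // only fires on a malformed pattern
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}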
	I0707 16:08:41.389550   32269 start.go:466] detecting cgroup driver to use...
	I0707 16:08:41.389648   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0707 16:08:41.402447   32269 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0707 16:08:41.402777   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0707 16:08:41.409298   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0707 16:08:41.415695   32269 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0707 16:08:41.415734   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0707 16:08:41.422221   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0707 16:08:41.428852   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0707 16:08:41.435480   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0707 16:08:41.442011   32269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0707 16:08:41.448679   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0707 16:08:41.455330   32269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0707 16:08:41.461097   32269 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0707 16:08:41.461175   32269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0707 16:08:41.467205   32269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0707 16:08:41.551382   32269 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0707 16:08:41.564033   32269 start.go:466] detecting cgroup driver to use...
	I0707 16:08:41.564108   32269 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0707 16:08:41.572808   32269 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0707 16:08:41.573462   32269 command_runner.go:130] > [Unit]
	I0707 16:08:41.573471   32269 command_runner.go:130] > Description=Docker Application Container Engine
	I0707 16:08:41.573476   32269 command_runner.go:130] > Documentation=https://docs.docker.com
	I0707 16:08:41.573480   32269 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0707 16:08:41.573485   32269 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0707 16:08:41.573491   32269 command_runner.go:130] > StartLimitBurst=3
	I0707 16:08:41.573495   32269 command_runner.go:130] > StartLimitIntervalSec=60
	I0707 16:08:41.573498   32269 command_runner.go:130] > [Service]
	I0707 16:08:41.573502   32269 command_runner.go:130] > Type=notify
	I0707 16:08:41.573505   32269 command_runner.go:130] > Restart=on-failure
	I0707 16:08:41.573515   32269 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0707 16:08:41.573529   32269 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0707 16:08:41.573537   32269 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0707 16:08:41.573543   32269 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0707 16:08:41.573548   32269 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0707 16:08:41.573554   32269 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0707 16:08:41.573560   32269 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0707 16:08:41.573571   32269 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0707 16:08:41.573578   32269 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0707 16:08:41.573583   32269 command_runner.go:130] > ExecStart=
	I0707 16:08:41.573596   32269 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0707 16:08:41.573604   32269 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0707 16:08:41.573611   32269 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0707 16:08:41.573616   32269 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0707 16:08:41.573620   32269 command_runner.go:130] > LimitNOFILE=infinity
	I0707 16:08:41.573625   32269 command_runner.go:130] > LimitNPROC=infinity
	I0707 16:08:41.573631   32269 command_runner.go:130] > LimitCORE=infinity
	I0707 16:08:41.573639   32269 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0707 16:08:41.573655   32269 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0707 16:08:41.573661   32269 command_runner.go:130] > TasksMax=infinity
	I0707 16:08:41.573665   32269 command_runner.go:130] > TimeoutStartSec=0
	I0707 16:08:41.573670   32269 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0707 16:08:41.573676   32269 command_runner.go:130] > Delegate=yes
	I0707 16:08:41.573684   32269 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0707 16:08:41.573688   32269 command_runner.go:130] > KillMode=process
	I0707 16:08:41.573693   32269 command_runner.go:130] > [Install]
	I0707 16:08:41.573706   32269 command_runner.go:130] > WantedBy=multi-user.target
	I0707 16:08:41.573785   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0707 16:08:41.582744   32269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0707 16:08:41.594339   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0707 16:08:41.603011   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0707 16:08:41.612260   32269 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0707 16:08:41.634151   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0707 16:08:41.647652   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0707 16:08:41.663289   32269 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0707 16:08:41.663800   32269 ssh_runner.go:195] Run: which cri-dockerd
	I0707 16:08:41.667093   32269 command_runner.go:130] > /usr/bin/cri-dockerd
	I0707 16:08:41.667387   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0707 16:08:41.676804   32269 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0707 16:08:41.694334   32269 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0707 16:08:41.787385   32269 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0707 16:08:41.877749   32269 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0707 16:08:41.877765   32269 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0707 16:08:41.889423   32269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0707 16:08:41.976291   32269 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0707 16:08:43.314053   32269 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.337714109s)
	I0707 16:08:43.314116   32269 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0707 16:08:43.397895   32269 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0707 16:08:43.482770   32269 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0707 16:08:43.575848   32269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0707 16:08:43.665156   32269 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0707 16:08:43.679827   32269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0707 16:08:43.776772   32269 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0707 16:08:43.831235   32269 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0707 16:08:43.831338   32269 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0707 16:08:43.834842   32269 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0707 16:08:43.834853   32269 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0707 16:08:43.834858   32269 command_runner.go:130] > Device: 16h/22d	Inode: 900         Links: 1
	I0707 16:08:43.834863   32269 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0707 16:08:43.834868   32269 command_runner.go:130] > Access: 2023-07-07 23:08:43.694255325 +0000
	I0707 16:08:43.834872   32269 command_runner.go:130] > Modify: 2023-07-07 23:08:43.694255325 +0000
	I0707 16:08:43.834876   32269 command_runner.go:130] > Change: 2023-07-07 23:08:43.698350968 +0000
	I0707 16:08:43.834880   32269 command_runner.go:130] >  Birth: -
	I0707 16:08:43.835371   32269 start.go:534] Will wait 60s for crictl version
	I0707 16:08:43.835423   32269 ssh_runner.go:195] Run: which crictl
	I0707 16:08:43.839790   32269 command_runner.go:130] > /usr/bin/crictl
	I0707 16:08:43.840052   32269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0707 16:08:43.865947   32269 command_runner.go:130] > Version:  0.1.0
	I0707 16:08:43.865960   32269 command_runner.go:130] > RuntimeName:  docker
	I0707 16:08:43.865964   32269 command_runner.go:130] > RuntimeVersion:  24.0.2
	I0707 16:08:43.865968   32269 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0707 16:08:43.866855   32269 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0707 16:08:43.866939   32269 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0707 16:08:43.883025   32269 command_runner.go:130] > 24.0.2
	I0707 16:08:43.883867   32269 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0707 16:08:43.899887   32269 command_runner.go:130] > 24.0.2
	I0707 16:08:43.924280   32269 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0707 16:08:43.924325   32269 main.go:141] libmachine: (multinode-136000) Calling .GetIP
	I0707 16:08:43.924746   32269 ssh_runner.go:195] Run: grep 192.168.64.1	host.minikube.internal$ /etc/hosts
	I0707 16:08:43.929044   32269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0707 16:08:43.937109   32269 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0707 16:08:43.937162   32269 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0707 16:08:43.949683   32269 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.3
	I0707 16:08:43.949695   32269 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.3
	I0707 16:08:43.949703   32269 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.3
	I0707 16:08:43.949707   32269 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.3
	I0707 16:08:43.949711   32269 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0707 16:08:43.949715   32269 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0707 16:08:43.949719   32269 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0707 16:08:43.949723   32269 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0707 16:08:43.949727   32269 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0707 16:08:43.949732   32269 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0707 16:08:43.950217   32269 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0707 16:08:43.950231   32269 docker.go:566] Images already preloaded, skipping extraction
	I0707 16:08:43.950296   32269 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0707 16:08:43.962829   32269 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.3
	I0707 16:08:43.962845   32269 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.3
	I0707 16:08:43.962856   32269 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.3
	I0707 16:08:43.962862   32269 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.3
	I0707 16:08:43.962866   32269 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0707 16:08:43.962873   32269 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0707 16:08:43.962879   32269 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0707 16:08:43.962884   32269 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0707 16:08:43.962889   32269 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0707 16:08:43.962895   32269 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0707 16:08:43.963355   32269 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0707 16:08:43.963377   32269 cache_images.go:84] Images are preloaded, skipping loading
	I0707 16:08:43.963448   32269 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0707 16:08:43.980303   32269 command_runner.go:130] > cgroupfs
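The cgroupfs answer from docker info feeds directly into the KubeletConfiguration rendered below (cgroupDriver: cgroupfs); kubelet and the container runtime must agree on the cgroup driver or pods fail to start. A sketch of the probe, shelling out to the docker CLI the same way the log does:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	driver := strings.TrimSpace(string(out)) // "cgroupfs" or "systemd"
	fmt.Println("cgroup driver:", driver)
}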
	I0707 16:08:43.980867   32269 cni.go:84] Creating CNI manager for ""
	I0707 16:08:43.980877   32269 cni.go:137] 2 nodes found, recommending kindnet
	I0707 16:08:43.980891   32269 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0707 16:08:43.980906   32269 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.55 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-136000 NodeName:multinode-136000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0707 16:08:43.980995   32269 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.64.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-136000"
	  kubeletExtraArgs:
	    node-ip: 192.168.64.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.64.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
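The rendered kubeadm config above is a single file carrying four YAML documents separated by --- markers: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration (it is copied to /var/tmp/minikube/kubeadm.yaml.new further down). A small Go sketch that splits such a file and reports each document's kind, handy when checking what a profile actually rendered:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(unknown)"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kind = strings.TrimPrefix(line, "kind: ")
				break
			}
		}
		fmt.Printf("document %d: %s\n", i+1, kind)
	}
}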
	I0707 16:08:43.981055   32269 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-136000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0707 16:08:43.981114   32269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0707 16:08:43.987846   32269 command_runner.go:130] > kubeadm
	I0707 16:08:43.987858   32269 command_runner.go:130] > kubectl
	I0707 16:08:43.987862   32269 command_runner.go:130] > kubelet
	I0707 16:08:43.987979   32269 binaries.go:44] Found k8s binaries, skipping transfer
	I0707 16:08:43.988029   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0707 16:08:43.994279   32269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0707 16:08:44.005201   32269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0707 16:08:44.016225   32269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0707 16:08:44.027328   32269 ssh_runner.go:195] Run: grep 192.168.64.55	control-plane.minikube.internal$ /etc/hosts
	I0707 16:08:44.029559   32269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0707 16:08:44.037379   32269 certs.go:56] Setting up /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000 for IP: 192.168.64.55
	I0707 16:08:44.037393   32269 certs.go:190] acquiring lock for shared ca certs: {Name:mkd09f0b55668af08c319f1908565cfe1a95e4c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0707 16:08:44.037555   32269 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.key
	I0707 16:08:44.037614   32269 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16845-29196/.minikube/proxy-client-ca.key
	I0707 16:08:44.037696   32269 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/client.key
	I0707 16:08:44.037764   32269 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/apiserver.key.07b57284
	I0707 16:08:44.037824   32269 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/proxy-client.key
	I0707 16:08:44.037833   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0707 16:08:44.037861   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0707 16:08:44.037887   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0707 16:08:44.037907   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0707 16:08:44.037926   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0707 16:08:44.037943   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0707 16:08:44.037960   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0707 16:08:44.037978   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0707 16:08:44.038072   32269 certs.go:437] found cert: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/29643.pem (1338 bytes)
	W0707 16:08:44.038118   32269 certs.go:433] ignoring /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/29643_empty.pem, impossibly tiny 0 bytes
	I0707 16:08:44.038129   32269 certs.go:437] found cert: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca-key.pem (1679 bytes)
	I0707 16:08:44.038164   32269 certs.go:437] found cert: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem (1082 bytes)
	I0707 16:08:44.038197   32269 certs.go:437] found cert: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/cert.pem (1123 bytes)
	I0707 16:08:44.038226   32269 certs.go:437] found cert: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/key.pem (1675 bytes)
	I0707 16:08:44.038290   32269 certs.go:437] found cert: /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem (1708 bytes)
	I0707 16:08:44.038319   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem -> /usr/share/ca-certificates/296432.pem
	I0707 16:08:44.038340   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0707 16:08:44.038357   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/29643.pem -> /usr/share/ca-certificates/29643.pem
	I0707 16:08:44.038740   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0707 16:08:44.054252   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0707 16:08:44.070311   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0707 16:08:44.085637   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0707 16:08:44.101541   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0707 16:08:44.116839   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0707 16:08:44.132462   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0707 16:08:44.147925   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0707 16:08:44.163266   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem --> /usr/share/ca-certificates/296432.pem (1708 bytes)
	I0707 16:08:44.178544   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0707 16:08:44.193815   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/29643.pem --> /usr/share/ca-certificates/29643.pem (1338 bytes)
	I0707 16:08:44.208883   32269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0707 16:08:44.220220   32269 ssh_runner.go:195] Run: openssl version
	I0707 16:08:44.223410   32269 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0707 16:08:44.223608   32269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296432.pem && ln -fs /usr/share/ca-certificates/296432.pem /etc/ssl/certs/296432.pem"
	I0707 16:08:44.230755   32269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296432.pem
	I0707 16:08:44.233434   32269 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  7 22:50 /usr/share/ca-certificates/296432.pem
	I0707 16:08:44.233642   32269 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul  7 22:50 /usr/share/ca-certificates/296432.pem
	I0707 16:08:44.233677   32269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296432.pem
	I0707 16:08:44.236885   32269 command_runner.go:130] > 3ec20f2e
	I0707 16:08:44.237079   32269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/296432.pem /etc/ssl/certs/3ec20f2e.0"
	I0707 16:08:44.244265   32269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0707 16:08:44.251283   32269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0707 16:08:44.253919   32269 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  7 22:44 /usr/share/ca-certificates/minikubeCA.pem
	I0707 16:08:44.254063   32269 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul  7 22:44 /usr/share/ca-certificates/minikubeCA.pem
	I0707 16:08:44.254096   32269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0707 16:08:44.257333   32269 command_runner.go:130] > b5213941
	I0707 16:08:44.257538   32269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0707 16:08:44.264486   32269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/29643.pem && ln -fs /usr/share/ca-certificates/29643.pem /etc/ssl/certs/29643.pem"
	I0707 16:08:44.271548   32269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29643.pem
	I0707 16:08:44.274210   32269 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  7 22:50 /usr/share/ca-certificates/29643.pem
	I0707 16:08:44.274393   32269 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul  7 22:50 /usr/share/ca-certificates/29643.pem
	I0707 16:08:44.274424   32269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29643.pem
	I0707 16:08:44.277706   32269 command_runner.go:130] > 51391683
	I0707 16:08:44.277945   32269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/29643.pem /etc/ssl/certs/51391683.0"
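The repeated three-step pattern above (copy a PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, then symlink /etc/ssl/certs/<hash>.0 to it) is how OpenSSL-based clients find trusted CAs: certificates are looked up by hashed-subject filename. A Go sketch driving the same openssl invocation, with a path taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash installs certPath into the OpenSSL hash directory
// the way the log does: hash the subject, then symlink <hash>.0.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}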
	I0707 16:08:44.285047   32269 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0707 16:08:44.287633   32269 command_runner.go:130] > ca.crt
	I0707 16:08:44.287644   32269 command_runner.go:130] > ca.key
	I0707 16:08:44.287654   32269 command_runner.go:130] > healthcheck-client.crt
	I0707 16:08:44.287659   32269 command_runner.go:130] > healthcheck-client.key
	I0707 16:08:44.287663   32269 command_runner.go:130] > peer.crt
	I0707 16:08:44.287667   32269 command_runner.go:130] > peer.key
	I0707 16:08:44.287670   32269 command_runner.go:130] > server.crt
	I0707 16:08:44.287673   32269 command_runner.go:130] > server.key
	I0707 16:08:44.287842   32269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0707 16:08:44.291146   32269 command_runner.go:130] > Certificate will not expire
	I0707 16:08:44.291335   32269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0707 16:08:44.294719   32269 command_runner.go:130] > Certificate will not expire
	I0707 16:08:44.294920   32269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0707 16:08:44.298233   32269 command_runner.go:130] > Certificate will not expire
	I0707 16:08:44.298420   32269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0707 16:08:44.301688   32269 command_runner.go:130] > Certificate will not expire
	I0707 16:08:44.301884   32269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0707 16:08:44.305211   32269 command_runner.go:130] > Certificate will not expire
	I0707 16:08:44.305423   32269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0707 16:08:44.308701   32269 command_runner.go:130] > Certificate will not expire
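
`openssl x509 -checkend 86400` asks whether the certificate is still valid 86400 seconds (24 hours) from now, exiting 0 and printing "Certificate will not expire" if so. The same semantics can be sketched in pure Go with crypto/x509; this is an equivalent check under that assumption, not the command minikube actually runs (the log shows it shelling out to openssl):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// willExpireWithin reports whether the PEM certificate at path expires
// within the given window (the -checkend semantics).
func willExpireWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := willExpireWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if expiring {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
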
	I0707 16:08:44.308893   32269 kubeadm.go:404] StartCluster: {Name:multinode-136000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.55 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.56 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0707 16:08:44.308997   32269 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0707 16:08:44.322413   32269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0707 16:08:44.329076   32269 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0707 16:08:44.329086   32269 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0707 16:08:44.329091   32269 command_runner.go:130] > /var/lib/minikube/etcd:
	I0707 16:08:44.329094   32269 command_runner.go:130] > member
	I0707 16:08:44.329128   32269 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0707 16:08:44.329146   32269 kubeadm.go:636] restartCluster start
	I0707 16:08:44.329187   32269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0707 16:08:44.335693   32269 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:44.335980   32269 kubeconfig.go:135] verify returned: extract IP: "multinode-136000" does not appear in /Users/jenkins/minikube-integration/16845-29196/kubeconfig
	I0707 16:08:44.336048   32269 kubeconfig.go:146] "multinode-136000" context is missing from /Users/jenkins/minikube-integration/16845-29196/kubeconfig - will repair!
	I0707 16:08:44.336209   32269 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16845-29196/kubeconfig: {Name:mkd0efbd118d508759ab2c0498693bc4c84ef656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0707 16:08:44.336801   32269 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/16845-29196/kubeconfig
	I0707 16:08:44.336976   32269 kapi.go:59] client config for multinode-136000: &rest.Config{Host:"https://192.168.64.55:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/client.key", CAFile:"/Users/jenkins/minikube-integration/16845-29196/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2586920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0707 16:08:44.337453   32269 cert_rotation.go:137] Starting client certificate rotation controller
	I0707 16:08:44.337614   32269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0707 16:08:44.343808   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:44.343845   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:44.352209   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:44.854326   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:44.854488   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:44.865603   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:45.354267   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:45.354405   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:45.365901   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:45.854398   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:45.854547   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:45.865503   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:46.354376   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:46.354582   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:46.366017   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:46.854381   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:46.854538   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:46.866310   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:47.354410   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:47.354563   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:47.366380   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:47.854421   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:47.854575   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:47.865241   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:48.353084   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:48.353267   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:48.364610   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:48.853374   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:48.853531   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:48.864489   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:49.354426   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:49.354633   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:49.366393   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:49.853192   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:49.853302   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:49.862923   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:50.354479   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:50.354633   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:50.365102   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:50.854598   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:50.854769   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:50.864504   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:51.354496   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:51.354652   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:51.366012   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:51.854498   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:51.854657   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:51.866081   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:52.354502   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:52.354700   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:52.365767   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:52.854549   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:52.854721   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:52.865216   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:53.353325   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:53.353433   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:53.364628   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:53.854579   32269 api_server.go:166] Checking apiserver status ...
	I0707 16:08:53.854708   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0707 16:08:53.865742   32269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0707 16:08:54.345206   32269 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
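
The block above is a retry loop: the runner probes for the kube-apiserver process with pgrep roughly every 500 ms until its context deadline (about ten seconds here) expires, and the expired deadline is what produced the "needs reconfigure" decision just logged. A minimal sketch of that poll-until-deadline pattern; the probe is a stand-in for minikube's internal check in api_server.go:

package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls probe every interval until it succeeds or ctx expires.
func waitForAPIServer(ctx context.Context, interval time.Duration, probe func() error) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := probe(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // context.DeadlineExceeded, as logged above
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	err := waitForAPIServer(ctx, 500*time.Millisecond, func() error {
		// pgrep exits non-zero when no process matches, as in the log above.
		return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	})
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("needs reconfigure: apiserver error: context deadline exceeded")
	}
}
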
	I0707 16:08:54.345236   32269 kubeadm.go:1128] stopping kube-system containers ...
	I0707 16:08:54.345351   32269 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0707 16:08:54.364286   32269 command_runner.go:130] > a518f066f2a8
	I0707 16:08:54.364297   32269 command_runner.go:130] > 5446c9eb3ec8
	I0707 16:08:54.364300   32269 command_runner.go:130] > 3b27f9dc5b00
	I0707 16:08:54.364303   32269 command_runner.go:130] > b1b16ce0e1c2
	I0707 16:08:54.364307   32269 command_runner.go:130] > 55a8f58d8c0e
	I0707 16:08:54.364310   32269 command_runner.go:130] > df2ce2928fd1
	I0707 16:08:54.364314   32269 command_runner.go:130] > 76e1078f7728
	I0707 16:08:54.364318   32269 command_runner.go:130] > 116c42927310
	I0707 16:08:54.364324   32269 command_runner.go:130] > 2f325ef45b4f
	I0707 16:08:54.364331   32269 command_runner.go:130] > de3cae1acc39
	I0707 16:08:54.364337   32269 command_runner.go:130] > b2c1151ec663
	I0707 16:08:54.364343   32269 command_runner.go:130] > 50f3c898eb77
	I0707 16:08:54.364348   32269 command_runner.go:130] > 317ce02a7796
	I0707 16:08:54.364352   32269 command_runner.go:130] > 1cd6ba509687
	I0707 16:08:54.364355   32269 command_runner.go:130] > 9278b14b49d4
	I0707 16:08:54.364359   32269 command_runner.go:130] > d462026e5304
	I0707 16:08:54.364362   32269 command_runner.go:130] > ef7a96b917fd
	I0707 16:08:54.364366   32269 command_runner.go:130] > bcff6ac1bb02
	I0707 16:08:54.364369   32269 command_runner.go:130] > 93d5297f53d3
	I0707 16:08:54.364375   32269 command_runner.go:130] > 1e81fc329386
	I0707 16:08:54.364378   32269 command_runner.go:130] > cd3e620f0d40
	I0707 16:08:54.364382   32269 command_runner.go:130] > bb551cae3442
	I0707 16:08:54.364385   32269 command_runner.go:130] > 550e6ada05cb
	I0707 16:08:54.364388   32269 command_runner.go:130] > deb47344a0c7
	I0707 16:08:54.364392   32269 command_runner.go:130] > e209537350e5
	I0707 16:08:54.364395   32269 command_runner.go:130] > 69a988d9753c
	I0707 16:08:54.364398   32269 command_runner.go:130] > d9bf8dafc1ef
	I0707 16:08:54.364401   32269 command_runner.go:130] > d7bfdc2352e7
	I0707 16:08:54.364405   32269 command_runner.go:130] > 1b78fb311f21
	I0707 16:08:54.364408   32269 command_runner.go:130] > 6725ed88dcdf
	I0707 16:08:54.364412   32269 command_runner.go:130] > bbb8888a48de
	I0707 16:08:54.364424   32269 docker.go:462] Stopping containers: [a518f066f2a8 5446c9eb3ec8 3b27f9dc5b00 b1b16ce0e1c2 55a8f58d8c0e df2ce2928fd1 76e1078f7728 116c42927310 2f325ef45b4f de3cae1acc39 b2c1151ec663 50f3c898eb77 317ce02a7796 1cd6ba509687 9278b14b49d4 d462026e5304 ef7a96b917fd bcff6ac1bb02 93d5297f53d3 1e81fc329386 cd3e620f0d40 bb551cae3442 550e6ada05cb deb47344a0c7 e209537350e5 69a988d9753c d9bf8dafc1ef d7bfdc2352e7 1b78fb311f21 6725ed88dcdf bbb8888a48de]
	I0707 16:08:54.364495   32269 ssh_runner.go:195] Run: docker stop a518f066f2a8 5446c9eb3ec8 3b27f9dc5b00 b1b16ce0e1c2 55a8f58d8c0e df2ce2928fd1 76e1078f7728 116c42927310 2f325ef45b4f de3cae1acc39 b2c1151ec663 50f3c898eb77 317ce02a7796 1cd6ba509687 9278b14b49d4 d462026e5304 ef7a96b917fd bcff6ac1bb02 93d5297f53d3 1e81fc329386 cd3e620f0d40 bb551cae3442 550e6ada05cb deb47344a0c7 e209537350e5 69a988d9753c d9bf8dafc1ef d7bfdc2352e7 1b78fb311f21 6725ed88dcdf bbb8888a48de
	I0707 16:08:54.379408   32269 command_runner.go:130] > a518f066f2a8
	I0707 16:08:54.379459   32269 command_runner.go:130] > 5446c9eb3ec8
	I0707 16:08:54.379464   32269 command_runner.go:130] > 3b27f9dc5b00
	I0707 16:08:54.379472   32269 command_runner.go:130] > b1b16ce0e1c2
	I0707 16:08:54.379476   32269 command_runner.go:130] > 55a8f58d8c0e
	I0707 16:08:54.379480   32269 command_runner.go:130] > df2ce2928fd1
	I0707 16:08:54.379641   32269 command_runner.go:130] > 76e1078f7728
	I0707 16:08:54.379648   32269 command_runner.go:130] > 116c42927310
	I0707 16:08:54.379652   32269 command_runner.go:130] > 2f325ef45b4f
	I0707 16:08:54.379656   32269 command_runner.go:130] > de3cae1acc39
	I0707 16:08:54.379659   32269 command_runner.go:130] > b2c1151ec663
	I0707 16:08:54.379663   32269 command_runner.go:130] > 50f3c898eb77
	I0707 16:08:54.379666   32269 command_runner.go:130] > 317ce02a7796
	I0707 16:08:54.379670   32269 command_runner.go:130] > 1cd6ba509687
	I0707 16:08:54.379673   32269 command_runner.go:130] > 9278b14b49d4
	I0707 16:08:54.379676   32269 command_runner.go:130] > d462026e5304
	I0707 16:08:54.379681   32269 command_runner.go:130] > ef7a96b917fd
	I0707 16:08:54.379684   32269 command_runner.go:130] > bcff6ac1bb02
	I0707 16:08:54.379688   32269 command_runner.go:130] > 93d5297f53d3
	I0707 16:08:54.379694   32269 command_runner.go:130] > 1e81fc329386
	I0707 16:08:54.379698   32269 command_runner.go:130] > cd3e620f0d40
	I0707 16:08:54.379707   32269 command_runner.go:130] > bb551cae3442
	I0707 16:08:54.379712   32269 command_runner.go:130] > 550e6ada05cb
	I0707 16:08:54.379716   32269 command_runner.go:130] > deb47344a0c7
	I0707 16:08:54.379719   32269 command_runner.go:130] > e209537350e5
	I0707 16:08:54.379723   32269 command_runner.go:130] > 69a988d9753c
	I0707 16:08:54.379726   32269 command_runner.go:130] > d9bf8dafc1ef
	I0707 16:08:54.379730   32269 command_runner.go:130] > d7bfdc2352e7
	I0707 16:08:54.379733   32269 command_runner.go:130] > 1b78fb311f21
	I0707 16:08:54.379737   32269 command_runner.go:130] > 6725ed88dcdf
	I0707 16:08:54.379740   32269 command_runner.go:130] > bbb8888a48de
	I0707 16:08:54.380488   32269 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0707 16:08:54.393263   32269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0707 16:08:54.400009   32269 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0707 16:08:54.400019   32269 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0707 16:08:54.400024   32269 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0707 16:08:54.400031   32269 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0707 16:08:54.400140   32269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0707 16:08:54.400180   32269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0707 16:08:54.406745   32269 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0707 16:08:54.406762   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0707 16:08:54.474728   32269 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0707 16:08:54.474973   32269 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0707 16:08:54.475328   32269 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0707 16:08:54.475614   32269 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0707 16:08:54.475996   32269 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0707 16:08:54.476373   32269 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0707 16:08:54.476824   32269 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0707 16:08:54.477172   32269 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0707 16:08:54.477532   32269 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0707 16:08:54.477809   32269 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0707 16:08:54.478188   32269 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0707 16:08:54.479247   32269 command_runner.go:130] > [certs] Using the existing "sa" key
	I0707 16:08:54.479283   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0707 16:08:54.520074   32269 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0707 16:08:54.590387   32269 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0707 16:08:54.827680   32269 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0707 16:08:54.893985   32269 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0707 16:08:55.087888   32269 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0707 16:08:55.089870   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0707 16:08:55.140213   32269 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0707 16:08:55.141077   32269 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0707 16:08:55.141281   32269 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0707 16:08:55.246438   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0707 16:08:55.294366   32269 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0707 16:08:55.294380   32269 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0707 16:08:55.297942   32269 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0707 16:08:55.298676   32269 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0707 16:08:55.300049   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0707 16:08:55.339145   32269 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
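
Instead of a full `kubeadm init`, the restart path replays individual init phases against the same generated config: certs, kubeconfig, kubelet-start, control-plane, and etcd, in that order. A sketch of that sequence as a plain runner; the phase list and config path come from the log above, everything else is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The five phases the log replays, in order.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v: %v\n", phase, err)
			os.Exit(1)
		}
	}
}
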
	I0707 16:08:55.349017   32269 api_server.go:52] waiting for apiserver process to appear ...
	I0707 16:08:55.349095   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0707 16:08:55.858207   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0707 16:08:56.358024   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0707 16:08:56.857866   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0707 16:08:57.358507   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0707 16:08:57.857883   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0707 16:08:57.867774   32269 command_runner.go:130] > 1700
	I0707 16:08:57.867991   32269 api_server.go:72] duration metric: took 2.518920572s to wait for apiserver process to appear ...
	I0707 16:08:57.868002   32269 api_server.go:88] waiting for apiserver healthz status ...
	I0707 16:08:57.868015   32269 api_server.go:253] Checking apiserver healthz at https://192.168.64.55:8443/healthz ...
	I0707 16:09:00.630569   32269 api_server.go:279] https://192.168.64.55:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0707 16:09:00.630595   32269 api_server.go:103] status: https://192.168.64.55:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0707 16:09:01.132143   32269 api_server.go:253] Checking apiserver healthz at https://192.168.64.55:8443/healthz ...
	I0707 16:09:01.137719   32269 api_server.go:279] https://192.168.64.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0707 16:09:01.137735   32269 api_server.go:103] status: https://192.168.64.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0707 16:09:01.631260   32269 api_server.go:253] Checking apiserver healthz at https://192.168.64.55:8443/healthz ...
	I0707 16:09:01.638340   32269 api_server.go:279] https://192.168.64.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0707 16:09:01.638357   32269 api_server.go:103] status: https://192.168.64.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0707 16:09:02.130776   32269 api_server.go:253] Checking apiserver healthz at https://192.168.64.55:8443/healthz ...
	I0707 16:09:02.134171   32269 api_server.go:279] https://192.168.64.55:8443/healthz returned 200:
	ok
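
The /healthz progression above is typical of an apiserver coming back up: 403 at first (anonymous access to /healthz is normally only granted once the RBAC bootstrap roles exist), then 500 while poststarthooks finish, and finally 200 with body "ok". A sketch of such a probe loop; it skips TLS verification for brevity, whereas the real client trusts the cluster CA and presents a client certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// healthy returns true once GET /healthz answers 200 with body "ok".
func healthy(client *http.Client, url string) bool {
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return resp.StatusCode == http.StatusOK
}

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Illustrative only: trust-anything TLS to reach a self-signed apiserver.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for !healthy(client, "https://192.168.64.55:8443/healthz") {
		time.Sleep(500 * time.Millisecond)
	}
}
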
	I0707 16:09:02.134227   32269 round_trippers.go:463] GET https://192.168.64.55:8443/version
	I0707 16:09:02.134232   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:02.134247   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:02.134253   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:02.140004   32269 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0707 16:09:02.140017   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:02.140023   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:02.140028   32269 round_trippers.go:580]     Content-Length: 263
	I0707 16:09:02.140033   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:02 GMT
	I0707 16:09:02.140037   32269 round_trippers.go:580]     Audit-Id: 247db14d-bb03-46ad-ba2e-78fb14351827
	I0707 16:09:02.140042   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:02.140046   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:02.140052   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:02.140068   32269 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0707 16:09:02.140114   32269 api_server.go:141] control plane version: v1.27.3
	I0707 16:09:02.140123   32269 api_server.go:131] duration metric: took 4.272022782s to wait for apiserver health ...
	I0707 16:09:02.140129   32269 cni.go:84] Creating CNI manager for ""
	I0707 16:09:02.140135   32269 cni.go:137] 2 nodes found, recommending kindnet
	I0707 16:09:02.178134   32269 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0707 16:09:02.215039   32269 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0707 16:09:02.220489   32269 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0707 16:09:02.220502   32269 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0707 16:09:02.220507   32269 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0707 16:09:02.220513   32269 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0707 16:09:02.220517   32269 command_runner.go:130] > Access: 2023-07-07 23:08:37.211250078 +0000
	I0707 16:09:02.220522   32269 command_runner.go:130] > Modify: 2023-06-30 22:28:30.000000000 +0000
	I0707 16:09:02.220527   32269 command_runner.go:130] > Change: 2023-07-07 23:08:35.896250169 +0000
	I0707 16:09:02.220530   32269 command_runner.go:130] >  Birth: -
	I0707 16:09:02.220560   32269 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0707 16:09:02.220567   32269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0707 16:09:02.252321   32269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0707 16:09:03.177128   32269 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0707 16:09:03.179552   32269 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0707 16:09:03.181208   32269 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0707 16:09:03.189806   32269 command_runner.go:130] > daemonset.apps/kindnet configured
	I0707 16:09:03.211313   32269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0707 16:09:03.211409   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods
	I0707 16:09:03.211419   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.211434   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.211445   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.215768   32269 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0707 16:09:03.215782   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.215788   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.215793   32269 round_trippers.go:580]     Audit-Id: 5b1c3cf4-fcc3-40ca-8919-b9d3264790e0
	I0707 16:09:03.215799   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.215806   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.215813   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.215820   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.216660   32269 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1098"},"items":[{"metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84962 chars]
	I0707 16:09:03.219748   32269 system_pods.go:59] 12 kube-system pods found
	I0707 16:09:03.219763   32269 system_pods.go:61] "coredns-5d78c9869d-78qmb" [d9671f13-fa08-4161-b216-53f645b9a1c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0707 16:09:03.219769   32269 system_pods.go:61] "etcd-multinode-136000" [636b837f-c544-4688-aa2b-2f602c1546c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0707 16:09:03.219773   32269 system_pods.go:61] "kindnet-gj2vg" [596c8647-685e-449c-86c0-9aeb7dddb2f5] Running
	I0707 16:09:03.219778   32269 system_pods.go:61] "kindnet-h8rpq" [30c883b3-9941-48da-a543-d1649a5418f9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0707 16:09:03.219782   32269 system_pods.go:61] "kindnet-zpx7k" [179bc03c-a64f-48bc-9bb9-52e5c91e5037] Running
	I0707 16:09:03.219786   32269 system_pods.go:61] "kube-apiserver-multinode-136000" [e33f6220-5f99-43a2-adc8-49399f82e89c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0707 16:09:03.219792   32269 system_pods.go:61] "kube-controller-manager-multinode-136000" [a4c59edf-0147-4ae9-a3d0-b7559b3ab6c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0707 16:09:03.219796   32269 system_pods.go:61] "kube-proxy-5865g" [3b0f7832-d4d7-41e7-ab55-08284cf98427] Running
	I0707 16:09:03.219800   32269 system_pods.go:61] "kube-proxy-dvrg9" [f7473507-c702-444e-b727-71c8a8cc4c08] Running
	I0707 16:09:03.219808   32269 system_pods.go:61] "kube-proxy-wd4p8" [4979ea40-a983-4f80-b7ac-f6e05cd5f6b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0707 16:09:03.219815   32269 system_pods.go:61] "kube-scheduler-multinode-136000" [90cc3143-cca1-4ac0-9c0a-0bfce8a8d99e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0707 16:09:03.219821   32269 system_pods.go:61] "storage-provisioner" [e617383f-c16f-44a7-a1a4-a2813ecc84f2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0707 16:09:03.219826   32269 system_pods.go:74] duration metric: took 8.5022ms to wait for pod list to return data ...
	I0707 16:09:03.219834   32269 node_conditions.go:102] verifying NodePressure condition ...
	I0707 16:09:03.219871   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes
	I0707 16:09:03.219875   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.219881   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.219888   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.221711   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:03.221722   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.221728   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.221733   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.221739   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.221744   32269 round_trippers.go:580]     Audit-Id: 12f0975b-b999-4702-b63c-2ebacf21d7d1
	I0707 16:09:03.221748   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.221754   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.221847   32269 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1098"},"items":[{"metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 9599 chars]
	I0707 16:09:03.222287   32269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0707 16:09:03.222302   32269 node_conditions.go:123] node cpu capacity is 2
	I0707 16:09:03.222311   32269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0707 16:09:03.222315   32269 node_conditions.go:123] node cpu capacity is 2
	I0707 16:09:03.222322   32269 node_conditions.go:105] duration metric: took 2.481054ms to run NodePressure ...
	I0707 16:09:03.222332   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0707 16:09:03.321474   32269 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0707 16:09:03.356359   32269 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0707 16:09:03.357206   32269 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0707 16:09:03.357263   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0707 16:09:03.357269   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.357275   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.357280   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.359604   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:03.359616   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.359625   32269 round_trippers.go:580]     Audit-Id: d7c77248-1633-421c-bf05-8688256fbcc6
	I0707 16:09:03.359632   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.359639   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.359660   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.359671   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.359679   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.359926   32269 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1100"},"items":[{"metadata":{"name":"etcd-multinode-136000","namespace":"kube-system","uid":"636b837f-c544-4688-aa2b-2f602c1546c6","resourceVersion":"1090","creationTimestamp":"2023-07-07T23:02:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.55:2379","kubernetes.io/config.hash":"8669674c45323598ebbb888fff5e6cb4","kubernetes.io/config.mirror":"8669674c45323598ebbb888fff5e6cb4","kubernetes.io/config.seen":"2023-07-07T23:02:20.447968150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29768 chars]
	I0707 16:09:03.360649   32269 kubeadm.go:787] kubelet initialised
	I0707 16:09:03.360658   32269 kubeadm.go:788] duration metric: took 3.442943ms waiting for restarted kubelet to initialise ...
	I0707 16:09:03.360665   32269 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0707 16:09:03.360703   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods
	I0707 16:09:03.360708   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.360714   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.360721   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.363439   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:03.363448   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.363456   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.363480   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.363492   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.363499   32269 round_trippers.go:580]     Audit-Id: 6c3873fb-46ad-4235-8c4a-9de668256e71
	I0707 16:09:03.363505   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.363510   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.364961   32269 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1100"},"items":[{"metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84962 chars]
	I0707 16:09:03.367273   32269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:03.367312   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:03.367317   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.367323   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.367329   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.369039   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:03.369052   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.369063   32269 round_trippers.go:580]     Audit-Id: 9f12d6f5-8206-4159-9f38-0abe8bdf661d
	I0707 16:09:03.369073   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.369082   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.369090   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.369099   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.369108   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.369322   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:03.369565   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:03.369571   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.369577   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.369584   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.371132   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:03.371141   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.371147   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.371152   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.371158   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.371162   32269 round_trippers.go:580]     Audit-Id: 447f2f44-5cc7-4191-8925-a6d8bb1e999f
	I0707 16:09:03.371168   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.371172   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.371321   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:03.371513   32269 pod_ready.go:97] node "multinode-136000" hosting pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:03.371522   32269 pod_ready.go:81] duration metric: took 4.238569ms waiting for pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace to be "Ready" ...
	E0707 16:09:03.371527   32269 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-136000" hosting pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
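The exchange above is one full iteration of the extra wait: GET the pod, GET the node it runs on, and if the node's Ready condition is not True, give up on that pod immediately ("skipping!") instead of burning the 4m0s budget. A minimal client-go sketch of that decision, assuming an already-configured clientset (function and variable names here are illustrative, not minikube's):

	package sketch

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podReadyOnReadyNode returns true only when both the pod's Ready
	// condition and its hosting node's Ready condition are True; a
	// NotReady node short-circuits the wait, matching the "(skipping!)"
	// lines in the log.
	func podReadyOnReadyNode(ctx context.Context, cs kubernetes.Interface, ns, pod string) (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		n, err := cs.CoreV1().Nodes().Get(ctx, p.Spec.NodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				return false, fmt.Errorf("node %q hosting pod %q is not Ready, skipping", n.Name, pod)
			}
		}
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

The same pod/node pair of GETs repeats below for etcd, the apiserver, the controller manager, kube-proxy, and the scheduler.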
	I0707 16:09:03.371533   32269 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:03.371557   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-136000
	I0707 16:09:03.371561   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.371567   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.371573   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.372937   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:03.372944   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.372950   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.372954   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.372959   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.372965   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.372969   32269 round_trippers.go:580]     Audit-Id: b94d4be0-cd77-491f-8b2d-3a797f785a3a
	I0707 16:09:03.372975   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.373258   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-136000","namespace":"kube-system","uid":"636b837f-c544-4688-aa2b-2f602c1546c6","resourceVersion":"1090","creationTimestamp":"2023-07-07T23:02:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.55:2379","kubernetes.io/config.hash":"8669674c45323598ebbb888fff5e6cb4","kubernetes.io/config.mirror":"8669674c45323598ebbb888fff5e6cb4","kubernetes.io/config.seen":"2023-07-07T23:02:20.447968150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6296 chars]
	I0707 16:09:03.373463   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:03.373470   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.373476   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.373482   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.374725   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:03.374732   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.374737   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.374742   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.374746   32269 round_trippers.go:580]     Audit-Id: 0a372861-8401-416a-b28a-2693b4146ff6
	I0707 16:09:03.374751   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.374756   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.374761   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.374988   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:03.375155   32269 pod_ready.go:97] node "multinode-136000" hosting pod "etcd-multinode-136000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:03.375163   32269 pod_ready.go:81] duration metric: took 3.626176ms waiting for pod "etcd-multinode-136000" in "kube-system" namespace to be "Ready" ...
	E0707 16:09:03.375168   32269 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-136000" hosting pod "etcd-multinode-136000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:03.375177   32269 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:03.375204   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-136000
	I0707 16:09:03.375209   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.375214   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.375220   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.376982   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:03.376994   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.377002   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.377009   32269 round_trippers.go:580]     Audit-Id: 06924715-62a6-441f-9141-88242ec7a0bb
	I0707 16:09:03.377018   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.377023   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.377029   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.377034   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.377299   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-136000","namespace":"kube-system","uid":"e33f6220-5f99-43a2-adc8-49399f82e89c","resourceVersion":"1088","creationTimestamp":"2023-07-07T23:02:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.55:8443","kubernetes.io/config.hash":"10d234a603360886d3e49d7f2ebd7116","kubernetes.io/config.mirror":"10d234a603360886d3e49d7f2ebd7116","kubernetes.io/config.seen":"2023-07-07T23:02:20.447888975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7853 chars]
	I0707 16:09:03.377521   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:03.377527   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.377533   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.377539   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.379566   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:03.379577   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.379586   32269 round_trippers.go:580]     Audit-Id: e90f5122-858c-485b-bf73-6eebac21bf2d
	I0707 16:09:03.379594   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.379600   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.379605   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.379610   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.379615   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.379995   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:03.380194   32269 pod_ready.go:97] node "multinode-136000" hosting pod "kube-apiserver-multinode-136000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:03.380203   32269 pod_ready.go:81] duration metric: took 5.020046ms waiting for pod "kube-apiserver-multinode-136000" in "kube-system" namespace to be "Ready" ...
	E0707 16:09:03.380208   32269 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-136000" hosting pod "kube-apiserver-multinode-136000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:03.380217   32269 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:03.411767   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-136000
	I0707 16:09:03.411794   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.411841   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.411853   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.415852   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:03.415867   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.415875   32269 round_trippers.go:580]     Audit-Id: 20628caf-91cc-4b0e-adf6-565b561b20d0
	I0707 16:09:03.415882   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.415889   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.415896   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.415903   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.415911   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.416184   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-136000","namespace":"kube-system","uid":"a4c59edf-0147-4ae9-a3d0-b7559b3ab6c9","resourceVersion":"1091","creationTimestamp":"2023-07-07T23:02:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8b16ffd443c4ff5953586fb0655a6320","kubernetes.io/config.mirror":"8b16ffd443c4ff5953586fb0655a6320","kubernetes.io/config.seen":"2023-07-07T23:02:28.360407979Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I0707 16:09:03.611694   32269 request.go:628] Waited for 195.204208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:03.611727   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:03.611748   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.611759   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.611766   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.613566   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:03.613578   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.613584   32269 round_trippers.go:580]     Audit-Id: 24f0011a-6858-4477-a3f6-ecdc3ced2a11
	I0707 16:09:03.613589   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.613618   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.613627   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.613633   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.613638   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.613794   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:03.613991   32269 pod_ready.go:97] node "multinode-136000" hosting pod "kube-controller-manager-multinode-136000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:03.614002   32269 pod_ready.go:81] duration metric: took 233.773711ms waiting for pod "kube-controller-manager-multinode-136000" in "kube-system" namespace to be "Ready" ...
	E0707 16:09:03.614014   32269 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-136000" hosting pod "kube-controller-manager-multinode-136000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
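The "Waited ... due to client-side throttling, not priority and fairness" lines that start appearing in this stretch come from client-go's own token-bucket limiter, not from the API server: once the polling loop issues GETs faster than the configured QPS, each request parks in the bucket and the wait is logged. The kapi.go config dump later in this log shows QPS:0, Burst:0, which client-go treats as its defaults (5 requests/second, burst 10). A sketch of lifting those limits when building the config — the values are illustrative, not a recommendation:

	package sketch

	import (
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/clientcmd"
	)

	// loadFastConfig builds a client config from a kubeconfig path and
	// raises the client-side token bucket so bursts of GETs, like the
	// pod/node polling in this log, are not delayed by throttling.
	func loadFastConfig(kubeconfig string) (*rest.Config, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // default is 5 requests/second
		cfg.Burst = 100 // default burst is 10
		return cfg, nil
	}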
	I0707 16:09:03.614020   32269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5865g" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:03.812491   32269 request.go:628] Waited for 198.402739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5865g
	I0707 16:09:03.812620   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5865g
	I0707 16:09:03.812632   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:03.812645   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:03.812656   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:03.815972   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:03.815991   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:03.816005   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:03.816013   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:03 GMT
	I0707 16:09:03.816019   32269 round_trippers.go:580]     Audit-Id: 2fa3074a-76a3-4b84-bdc5-caa672c704e1
	I0707 16:09:03.816026   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:03.816033   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:03.816040   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:03.816186   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5865g","generateName":"kube-proxy-","namespace":"kube-system","uid":"3b0f7832-d4d7-41e7-ab55-08284cf98427","resourceVersion":"1059","creationTimestamp":"2023-07-07T23:04:00Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"40ec798e-383e-4e94-b5d5-10fc13347c1a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"40ec798e-383e-4e94-b5d5-10fc13347c1a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0707 16:09:04.011805   32269 request.go:628] Waited for 195.171757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000-m03
	I0707 16:09:04.011859   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000-m03
	I0707 16:09:04.011869   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:04.011881   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:04.011894   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:04.014913   32269 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0707 16:09:04.014930   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:04.014939   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:04.014945   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:04.014952   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:04.014959   32269 round_trippers.go:580]     Content-Length: 210
	I0707 16:09:04.014967   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:04 GMT
	I0707 16:09:04.014974   32269 round_trippers.go:580]     Audit-Id: 01c31788-f61b-4aaa-8367-9f1c7a777ca9
	I0707 16:09:04.014981   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:04.015007   32269 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-136000-m03\" not found","reason":"NotFound","details":{"name":"multinode-136000-m03","kind":"nodes"},"code":404}
	I0707 16:09:04.015163   32269 pod_ready.go:97] node "multinode-136000-m03" hosting pod "kube-proxy-5865g" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-136000-m03": nodes "multinode-136000-m03" not found
	I0707 16:09:04.015175   32269 pod_ready.go:81] duration metric: took 401.139454ms waiting for pod "kube-proxy-5865g" in "kube-system" namespace to be "Ready" ...
	E0707 16:09:04.015182   32269 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-136000-m03" hosting pod "kube-proxy-5865g" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-136000-m03": nodes "multinode-136000-m03" not found
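This iteration is the signature of the underlying problem: kube-proxy-5865g is still scheduled to multinode-136000-m03, but that node object is gone, so the node GET returns 404 and the pod can never become Ready by the rule above. Telling "node deleted" apart from a transient error is what apimachinery's error helpers are for; a sketch (the helper name is mine):

	package sketch

	import (
		"context"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeGone reports whether a pod's recorded node has been deleted,
	// which is what the 404 on /api/v1/nodes/multinode-136000-m03
	// above indicates; any other error is surfaced for retry.
	func nodeGone(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		_, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // skip the pod: its node is not coming back
		}
		return false, err
	}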
	I0707 16:09:04.015191   32269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dvrg9" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:04.212662   32269 request.go:628] Waited for 197.411549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvrg9
	I0707 16:09:04.212760   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvrg9
	I0707 16:09:04.212770   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:04.212783   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:04.212794   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:04.216073   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:04.216089   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:04.216097   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:04.216103   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:04.216139   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:04 GMT
	I0707 16:09:04.216153   32269 round_trippers.go:580]     Audit-Id: 341f654a-d4b7-4f27-9fcf-0190bfd343bd
	I0707 16:09:04.216161   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:04.216168   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:04.216275   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dvrg9","generateName":"kube-proxy-","namespace":"kube-system","uid":"f7473507-c702-444e-b727-71c8a8cc4c08","resourceVersion":"936","creationTimestamp":"2023-07-07T23:03:17Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"40ec798e-383e-4e94-b5d5-10fc13347c1a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:03:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"40ec798e-383e-4e94-b5d5-10fc13347c1a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0707 16:09:04.411776   32269 request.go:628] Waited for 195.153203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000-m02
	I0707 16:09:04.411828   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000-m02
	I0707 16:09:04.411839   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:04.411889   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:04.411902   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:04.414863   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:04.414880   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:04.414888   32269 round_trippers.go:580]     Audit-Id: 85362d09-ed4f-4bdf-b984-2ec35681340c
	I0707 16:09:04.414895   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:04.414901   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:04.414909   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:04.414916   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:04.414930   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:04 GMT
	I0707 16:09:04.415022   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000-m02","uid":"e53ac27c-579d-4edc-87f1-2f80a931d265","resourceVersion":"955","creationTimestamp":"2023-07-07T23:06:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:06:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3252 chars]
	I0707 16:09:04.415226   32269 pod_ready.go:92] pod "kube-proxy-dvrg9" in "kube-system" namespace has status "Ready":"True"
	I0707 16:09:04.415236   32269 pod_ready.go:81] duration metric: took 400.030172ms waiting for pod "kube-proxy-dvrg9" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:04.415244   32269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wd4p8" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:04.613530   32269 request.go:628] Waited for 198.23456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wd4p8
	I0707 16:09:04.613584   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wd4p8
	I0707 16:09:04.613593   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:04.613606   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:04.613618   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:04.616532   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:04.616550   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:04.616558   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:04 GMT
	I0707 16:09:04.616565   32269 round_trippers.go:580]     Audit-Id: cba2ae13-a23d-49e5-a805-a26ff13b5413
	I0707 16:09:04.616581   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:04.616589   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:04.616597   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:04.616604   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:04.616712   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wd4p8","generateName":"kube-proxy-","namespace":"kube-system","uid":"4979ea40-a983-4f80-b7ac-f6e05cd5f6b4","resourceVersion":"1096","creationTimestamp":"2023-07-07T23:02:40Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"40ec798e-383e-4e94-b5d5-10fc13347c1a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"40ec798e-383e-4e94-b5d5-10fc13347c1a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5929 chars]
	I0707 16:09:04.812467   32269 request.go:628] Waited for 195.386339ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:04.812498   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:04.812503   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:04.812510   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:04.812515   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:04.814100   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:04.814116   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:04.814125   32269 round_trippers.go:580]     Audit-Id: 5556d0d6-ff62-4a1b-8f7c-0db11c482f7f
	I0707 16:09:04.814131   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:04.814136   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:04.814142   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:04.814146   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:04.814152   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:04 GMT
	I0707 16:09:04.814292   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:04.814486   32269 pod_ready.go:97] node "multinode-136000" hosting pod "kube-proxy-wd4p8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:04.814496   32269 pod_ready.go:81] duration metric: took 399.23728ms waiting for pod "kube-proxy-wd4p8" in "kube-system" namespace to be "Ready" ...
	E0707 16:09:04.814501   32269 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-136000" hosting pod "kube-proxy-wd4p8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:04.814508   32269 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:05.011652   32269 request.go:628] Waited for 197.082234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-136000
	I0707 16:09:05.011684   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-136000
	I0707 16:09:05.011695   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:05.011703   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:05.011710   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:05.013586   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:05.013597   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:05.013602   32269 round_trippers.go:580]     Audit-Id: 75fb5ab7-fd6f-43ca-96e8-588b38779806
	I0707 16:09:05.013608   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:05.013613   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:05.013618   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:05.013623   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:05.013628   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:05 GMT
	I0707 16:09:05.013703   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-136000","namespace":"kube-system","uid":"90cc3143-cca1-4ac0-9c0a-0bfce8a8d99e","resourceVersion":"1089","creationTimestamp":"2023-07-07T23:02:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"81a87492d868eacbd03c1d020dad533c","kubernetes.io/config.mirror":"81a87492d868eacbd03c1d020dad533c","kubernetes.io/config.seen":"2023-07-07T23:02:28.360408566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0707 16:09:05.213129   32269 request.go:628] Waited for 199.169502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:05.213240   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:05.213284   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:05.213298   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:05.213310   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:05.215952   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:05.215967   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:05.215975   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:05.215982   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:05.215989   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:05 GMT
	I0707 16:09:05.215997   32269 round_trippers.go:580]     Audit-Id: a3afa5d0-2161-4b84-923e-681ab6734cd0
	I0707 16:09:05.216003   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:05.216011   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:05.216082   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:05.216335   32269 pod_ready.go:97] node "multinode-136000" hosting pod "kube-scheduler-multinode-136000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:05.216350   32269 pod_ready.go:81] duration metric: took 401.828376ms waiting for pod "kube-scheduler-multinode-136000" in "kube-system" namespace to be "Ready" ...
	E0707 16:09:05.216358   32269 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-136000" hosting pod "kube-scheduler-multinode-136000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-136000" has status "Ready":"False"
	I0707 16:09:05.216366   32269 pod_ready.go:38] duration metric: took 1.855652353s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
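The summary line enumerates the label selectors that define "system-critical" for this wait. One way to gather the same pod set with client-go, sketched under the assumption of a configured clientset and the kube-system namespace (one List per selector; minikube's actual enumeration may differ):

	package sketch

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// criticalPods collects the pods behind the selectors named in the
	// summary line: CoreDNS, etcd, apiserver, controller-manager,
	// kube-proxy, and scheduler.
	func criticalPods(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
		selectors := []string{
			"k8s-app=kube-dns",
			"component=etcd",
			"component=kube-apiserver",
			"component=kube-controller-manager",
			"k8s-app=kube-proxy",
			"component=kube-scheduler",
		}
		var out []corev1.Pod
		for _, sel := range selectors {
			l, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				return nil, err
			}
			out = append(out, l.Items...)
		}
		return out, nil
	}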
	I0707 16:09:05.216378   32269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0707 16:09:05.224207   32269 command_runner.go:130] > -16
	I0707 16:09:05.224319   32269 ops.go:34] apiserver oom_adj: -16
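The -16 read back from /proc/<pid>/oom_adj confirms the kubelet has biased the OOM killer away from the apiserver (the legacy oom_adj scale runs from -17, never kill, to +15; negative values make a kill less likely). The same probe in Go rather than bash, as a sketch:

	package sketch

	import (
		"fmt"
		"os"
		"strconv"
		"strings"
	)

	// readOOMAdj reads a process's legacy OOM adjustment, the value the
	// log obtains via `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
	func readOOMAdj(pid int) (int, error) {
		b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
		if err != nil {
			return 0, err
		}
		return strconv.Atoi(strings.TrimSpace(string(b)))
	}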
	I0707 16:09:05.224327   32269 kubeadm.go:640] restartCluster took 20.894718755s
	I0707 16:09:05.224332   32269 kubeadm.go:406] StartCluster complete in 20.914984837s
	I0707 16:09:05.224340   32269 settings.go:142] acquiring lock: {Name:mk51b97c743cd3c6fc8ca8d160602ac40ac51808 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0707 16:09:05.224427   32269 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16845-29196/kubeconfig
	I0707 16:09:05.224810   32269 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16845-29196/kubeconfig: {Name:mkd0efbd118d508759ab2c0498693bc4c84ef656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
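Updating the kubeconfig is a read-modify-write on a file shared between minikube processes, which is why the lock.go line takes a write lock (Delay:500ms, Timeout:1m0s) before touching it. The load/point/write part, with the locking elided, sketched via client-go's clientcmd (function name mine):

	package sketch

	import (
		"k8s.io/client-go/tools/clientcmd"
	)

	// setCurrentContext loads a kubeconfig, switches current-context to
	// the given profile, and writes the file back; the file lock seen
	// in the log is omitted here for brevity.
	func setCurrentContext(path, context string) error {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return err
		}
		cfg.CurrentContext = context
		return clientcmd.WriteToFile(*cfg, path)
	}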
	I0707 16:09:05.225043   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0707 16:09:05.225074   32269 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0707 16:09:05.225235   32269 config.go:182] Loaded profile config "multinode-136000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0707 16:09:05.225436   32269 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/16845-29196/kubeconfig
	I0707 16:09:05.268934   32269 out.go:177] * Enabled addons: 
	I0707 16:09:05.290207   32269 addons.go:499] enable addons completed in 65.134152ms: enabled=[]
	I0707 16:09:05.269173   32269 kapi.go:59] client config for multinode-136000: &rest.Config{Host:"https://192.168.64.55:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/client.key", CAFile:"/Users/jenkins/minikube-integration/16845-29196/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2586920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
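The kapi.go dump above is the entire client configuration in one line: host 192.168.64.55:8443, client cert and key from the profile directory, the cluster CA, and everything else zero-valued (no token, no impersonation, default QPS/Burst). Assembled by hand it is just a struct literal; a sketch:

	package sketch

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	// certClient builds a clientset from the same ingredients the dump
	// shows: an endpoint plus mutual-TLS files, no bearer token.
	func certClient(host, certFile, keyFile, caFile string) (*kubernetes.Clientset, error) {
		cfg := &rest.Config{
			Host: host,
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: certFile,
				KeyFile:  keyFile,
				CAFile:   caFile,
			},
		}
		return kubernetes.NewForConfig(cfg)
	}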
	I0707 16:09:05.290490   32269 round_trippers.go:463] GET https://192.168.64.55:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0707 16:09:05.290497   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:05.290503   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:05.290509   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:05.292422   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:05.292434   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:05.292441   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:05.292446   32269 round_trippers.go:580]     Content-Length: 292
	I0707 16:09:05.292451   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:05 GMT
	I0707 16:09:05.292456   32269 round_trippers.go:580]     Audit-Id: c0d7b9d3-dfb7-453e-8970-d3260b363917
	I0707 16:09:05.292461   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:05.292466   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:05.292471   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:05.292485   32269 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5f7a01b1-7a53-49df-8161-430fd40f925b","resourceVersion":"1099","creationTimestamp":"2023-07-07T23:02:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0707 16:09:05.292591   32269 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-136000" context rescaled to 1 replicas
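The rescale goes through the deployment's scale subresource rather than patching the deployment spec, which is what the GET on .../deployments/coredns/scale above is for; with spec.replicas already 1, this run is effectively a no-op. A sketch with the typed client (assuming a configured clientset; this is the pattern, not minikube's code):

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// rescaleCoreDNS pins the coredns deployment's replica count via
	// the scale subresource, skipping the write when the count already
	// matches, as it does in this run.
	func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if scale.Spec.Replicas == replicas {
			return nil // already at the desired count
		}
		scale.Spec.Replicas = replicas
		_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}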
	I0707 16:09:05.292610   32269 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.64.55 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0707 16:09:05.304592   32269 command_runner.go:130] > apiVersion: v1
	I0707 16:09:05.314246   32269 command_runner.go:130] > data:
	I0707 16:09:05.314246   32269 out.go:177] * Verifying Kubernetes components...
	I0707 16:09:05.314256   32269 command_runner.go:130] >   Corefile: |
	I0707 16:09:05.314269   32269 command_runner.go:130] >     .:53 {
	I0707 16:09:05.335055   32269 command_runner.go:130] >         log
	I0707 16:09:05.335071   32269 command_runner.go:130] >         errors
	I0707 16:09:05.335077   32269 command_runner.go:130] >         health {
	I0707 16:09:05.335082   32269 command_runner.go:130] >            lameduck 5s
	I0707 16:09:05.335085   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0707 16:09:05.335086   32269 command_runner.go:130] >         }
	I0707 16:09:05.335095   32269 command_runner.go:130] >         ready
	I0707 16:09:05.335100   32269 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0707 16:09:05.335105   32269 command_runner.go:130] >            pods insecure
	I0707 16:09:05.335116   32269 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0707 16:09:05.335123   32269 command_runner.go:130] >            ttl 30
	I0707 16:09:05.335127   32269 command_runner.go:130] >         }
	I0707 16:09:05.335131   32269 command_runner.go:130] >         prometheus :9153
	I0707 16:09:05.335134   32269 command_runner.go:130] >         hosts {
	I0707 16:09:05.335138   32269 command_runner.go:130] >            192.168.64.1 host.minikube.internal
	I0707 16:09:05.335142   32269 command_runner.go:130] >            fallthrough
	I0707 16:09:05.335145   32269 command_runner.go:130] >         }
	I0707 16:09:05.335150   32269 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0707 16:09:05.335155   32269 command_runner.go:130] >            max_concurrent 1000
	I0707 16:09:05.335163   32269 command_runner.go:130] >         }
	I0707 16:09:05.335167   32269 command_runner.go:130] >         cache 30
	I0707 16:09:05.335172   32269 command_runner.go:130] >         loop
	I0707 16:09:05.335176   32269 command_runner.go:130] >         reload
	I0707 16:09:05.335180   32269 command_runner.go:130] >         loadbalance
	I0707 16:09:05.335184   32269 command_runner.go:130] >     }
	I0707 16:09:05.335187   32269 command_runner.go:130] > kind: ConfigMap
	I0707 16:09:05.335190   32269 command_runner.go:130] > metadata:
	I0707 16:09:05.335194   32269 command_runner.go:130] >   creationTimestamp: "2023-07-07T23:02:28Z"
	I0707 16:09:05.335198   32269 command_runner.go:130] >   name: coredns
	I0707 16:09:05.335201   32269 command_runner.go:130] >   namespace: kube-system
	I0707 16:09:05.335205   32269 command_runner.go:130] >   resourceVersion: "362"
	I0707 16:09:05.335209   32269 command_runner.go:130] >   uid: 871d2be9-274d-4c69-bf51-609656806846
	I0707 16:09:05.335279   32269 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
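The Corefile streamed back line by line above already carries a hosts block mapping 192.168.64.1 to host.minikube.internal, so start.go skips rewriting the ConfigMap. For the purposes of a sketch, that idempotency check can be as small as a substring test on the rendered Corefile (minikube's real check may be stricter):

	package sketch

	import "strings"

	// needsHostRecord reports whether the Corefile still lacks the
	// host.minikube.internal entry; false means skip the update, as
	// this run does.
	func needsHostRecord(corefile string) bool {
		return !strings.Contains(corefile, "host.minikube.internal")
	}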
	I0707 16:09:05.345563   32269 node_ready.go:35] waiting up to 6m0s for node "multinode-136000" to be "Ready" ...
	I0707 16:09:05.412421   32269 request.go:628] Waited for 66.7956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:05.412545   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:05.412558   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:05.412578   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:05.412590   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:05.415359   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:05.415375   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:05.415383   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:05.415390   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:05.415397   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:05.415404   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:05 GMT
	I0707 16:09:05.415413   32269 round_trippers.go:580]     Audit-Id: 80a09fbe-872e-4b22-85ef-4eab5151afb2
	I0707 16:09:05.415419   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:05.415514   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:05.916579   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:05.916600   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:05.916613   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:05.916623   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:05.920140   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:05.920157   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:05.920165   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:05.920172   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:05.920181   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:06 GMT
	I0707 16:09:05.920189   32269 round_trippers.go:580]     Audit-Id: dfdb7d72-33cd-40cb-bef4-19fd88e22b44
	I0707 16:09:05.920195   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:05.920202   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:05.920324   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:06.417273   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:06.417289   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:06.417298   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:06.417307   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:06.419481   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:06.419490   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:06.419496   32269 round_trippers.go:580]     Audit-Id: a96553df-a429-4dd9-ac53-66f1f808da09
	I0707 16:09:06.419503   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:06.419511   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:06.419521   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:06.419532   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:06.419537   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:06 GMT
	I0707 16:09:06.419665   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:06.916475   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:06.916496   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:06.916508   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:06.916518   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:06.919559   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:06.919574   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:06.919582   32269 round_trippers.go:580]     Audit-Id: b8c8cfba-cd73-4bc8-8530-46440084c6df
	I0707 16:09:06.919589   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:06.919595   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:06.919604   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:06.919615   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:06.919626   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:07 GMT
	I0707 16:09:06.919702   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:07.417621   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:07.417643   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:07.417656   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:07.417670   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:07.420710   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:07.420727   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:07.420736   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:07 GMT
	I0707 16:09:07.420751   32269 round_trippers.go:580]     Audit-Id: 832ae0d6-9076-45bd-b1ce-8732feff964b
	I0707 16:09:07.420759   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:07.420767   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:07.420775   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:07.420781   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:07.420867   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:07.421120   32269 node_ready.go:58] node "multinode-136000" has status "Ready":"False"
	I0707 16:09:07.916898   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:07.916912   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:07.916920   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:07.916967   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:07.918333   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:07.918342   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:07.918348   32269 round_trippers.go:580]     Audit-Id: 6c3d512a-d397-4a3c-b096-8ef39478a084
	I0707 16:09:07.918353   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:07.918358   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:07.918362   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:07.918368   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:07.918374   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:08 GMT
	I0707 16:09:07.918482   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:08.417444   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:08.417469   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:08.417482   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:08.417492   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:08.420892   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:08.420909   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:08.420917   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:08.420925   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:08.420931   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:08.420939   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:08.420946   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:08 GMT
	I0707 16:09:08.420953   32269 round_trippers.go:580]     Audit-Id: 5556eee3-2d34-47a2-ad1b-790ff7f7ea55
	I0707 16:09:08.421048   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1087","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0707 16:09:08.917516   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:08.917539   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:08.917556   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:08.917567   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:08.920889   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:08.920905   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:08.920913   32269 round_trippers.go:580]     Audit-Id: a5103e6e-13fe-491b-8534-5d1421099f21
	I0707 16:09:08.920920   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:08.920926   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:08.920933   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:08.920941   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:08.920948   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:09 GMT
	I0707 16:09:08.921055   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:08.921301   32269 node_ready.go:49] node "multinode-136000" has status "Ready":"True"
	I0707 16:09:08.921311   32269 node_ready.go:38] duration metric: took 3.575656563s waiting for node "multinode-136000" to be "Ready" ...
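	[Editor's note: the node_ready.go lines above show minikube re-issuing GET /api/v1/nodes/multinode-136000 roughly every 500ms until the node's NodeReady condition flips to True (resourceVersion 1087 -> 1177). A minimal client-go sketch of that loop, offered only as an illustration and not minikube's actual source, assuming a configured *kubernetes.Clientset:

	    import (
	        "context"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitNodeReady polls the API server until the named node reports the
	    // NodeReady condition as True, mirroring the ~500ms GET loop logged above.
	    // waitNodeReady is a hypothetical helper, not a minikube function.
	    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
	            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	            if err != nil {
	                return false, nil // tolerate transient API errors and keep polling
	            }
	            for _, c := range node.Status.Conditions {
	                if c.Type == corev1.NodeReady {
	                    return c.Status == corev1.ConditionTrue, nil
	                }
	            }
	            return false, nil
	        })
	    }

	Here the loop took ~3.6s, matching the "duration metric" line above.]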
	I0707 16:09:08.921318   32269 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0707 16:09:08.921360   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods
	I0707 16:09:08.921367   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:08.921375   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:08.921383   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:08.924636   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:08.924646   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:08.924651   32269 round_trippers.go:580]     Audit-Id: aeb6daae-7dcd-49a1-9dc8-987167f24b30
	I0707 16:09:08.924656   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:08.924664   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:08.924672   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:08.924679   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:08.924688   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:09 GMT
	I0707 16:09:08.925560   32269 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1177"},"items":[{"metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84372 chars]
	I0707 16:09:08.927373   32269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace to be "Ready" ...
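	[Editor's note: from here the same polling pattern repeats per system pod, starting with coredns-5d78c9869d-78qmb. The readiness test that decides when each loop stops is, in essence (a hypothetical sketch of what pod_ready.go checks, not its verbatim source):

	    // podIsReady reports whether a pod's PodReady condition is True; each GET
	    // of the coredns pod below is re-evaluating exactly this condition.
	    func podIsReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	The paired GET of the node after each pod check is minikube confirming the host node is still Ready while it waits.]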
	I0707 16:09:08.927410   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:08.927415   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:08.927422   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:08.927429   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:08.928989   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:08.928998   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:08.929005   32269 round_trippers.go:580]     Audit-Id: b57bd391-cbb3-4f54-96ec-36f7e50bb1e6
	I0707 16:09:08.929013   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:08.929021   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:08.929028   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:08.929040   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:08.929045   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:09 GMT
	I0707 16:09:08.929120   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:08.929342   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:08.929349   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:08.929355   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:08.929360   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:08.930780   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:08.930788   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:08.930794   32269 round_trippers.go:580]     Audit-Id: c59edf6e-b53c-4627-a3e6-9d51400d6a1e
	I0707 16:09:08.930801   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:08.930808   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:08.930815   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:08.930825   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:08.930832   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:09 GMT
	I0707 16:09:08.930950   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:09.433111   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:09.433137   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:09.433150   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:09.433162   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:09.436313   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:09.436328   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:09.436336   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:09.436343   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:09.436350   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:09.436356   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:09 GMT
	I0707 16:09:09.436363   32269 round_trippers.go:580]     Audit-Id: 05f79faf-1f6b-4d67-8994-b05bb3ccaa45
	I0707 16:09:09.436370   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:09.436517   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:09.436878   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:09.436888   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:09.436896   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:09.436903   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:09.438350   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:09.438359   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:09.438365   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:09.438369   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:09.438374   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:09.438379   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:09 GMT
	I0707 16:09:09.438384   32269 round_trippers.go:580]     Audit-Id: 27be861d-b07f-43c5-a959-3899f8a8a652
	I0707 16:09:09.438389   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:09.438446   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:09.932452   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:09.932482   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:09.932538   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:09.932550   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:09.935561   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:09.935577   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:09.935585   32269 round_trippers.go:580]     Audit-Id: 50a05c04-1c74-4661-b3b3-5ac166dbbae3
	I0707 16:09:09.935592   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:09.935598   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:09.935604   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:09.935612   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:09.935620   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:10 GMT
	I0707 16:09:09.935774   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:09.936139   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:09.936148   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:09.936156   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:09.936163   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:09.937863   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:09.937873   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:09.937879   32269 round_trippers.go:580]     Audit-Id: 66a9c8ae-7629-4b70-bb38-beba25c9312f
	I0707 16:09:09.937885   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:09.937891   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:09.937897   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:09.937902   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:09.937907   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:10 GMT
	I0707 16:09:09.938015   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:10.432612   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:10.432638   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:10.432654   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:10.432665   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:10.435994   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:10.436010   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:10.436018   32269 round_trippers.go:580]     Audit-Id: 8841c67f-aec3-441d-af44-51aa13cf0655
	I0707 16:09:10.436024   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:10.436031   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:10.436037   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:10.436044   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:10.436051   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:10 GMT
	I0707 16:09:10.436139   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:10.436494   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:10.436503   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:10.436512   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:10.436519   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:10.438160   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:10.438173   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:10.438188   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:10.438200   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:10.438205   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:10.438211   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:10 GMT
	I0707 16:09:10.438218   32269 round_trippers.go:580]     Audit-Id: 7495464c-757d-431c-95d3-e306da73db4a
	I0707 16:09:10.438226   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:10.438300   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:10.931542   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:10.931567   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:10.931581   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:10.931592   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:10.934654   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:10.934672   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:10.934680   32269 round_trippers.go:580]     Audit-Id: 18b01687-2efa-4026-9b3c-23fe058a245b
	I0707 16:09:10.934696   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:10.934705   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:10.934711   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:10.934724   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:10.934733   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:11 GMT
	I0707 16:09:10.934865   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:10.935223   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:10.935232   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:10.935241   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:10.935248   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:10.936970   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:10.936979   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:10.936985   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:10.936990   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:10.936995   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:10.937006   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:10.937011   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:11 GMT
	I0707 16:09:10.937017   32269 round_trippers.go:580]     Audit-Id: 41458f0c-3721-4c17-b633-2820dd891704
	I0707 16:09:10.937066   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:10.937234   32269 pod_ready.go:102] pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace has status "Ready":"False"
	I0707 16:09:11.431984   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:11.432010   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:11.432023   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:11.432033   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:11.434991   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:11.435007   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:11.435016   32269 round_trippers.go:580]     Audit-Id: fef492b0-7b5d-460b-8218-c6089ddc9054
	I0707 16:09:11.435023   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:11.435030   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:11.435037   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:11.435044   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:11.435051   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:11 GMT
	I0707 16:09:11.435134   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:11.435492   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:11.435501   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:11.435509   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:11.435517   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:11.437250   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:11.437259   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:11.437264   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:11 GMT
	I0707 16:09:11.437269   32269 round_trippers.go:580]     Audit-Id: 7ce6b7ba-48c8-44f5-8a20-49d547046baf
	I0707 16:09:11.437275   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:11.437280   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:11.437285   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:11.437290   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:11.437538   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:11.932351   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:11.932378   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:11.932391   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:11.932402   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:11.935523   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:11.935541   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:11.935549   32269 round_trippers.go:580]     Audit-Id: c1a1d5cc-3629-47bd-9825-d61518b052aa
	I0707 16:09:11.935569   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:11.935576   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:11.935587   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:11.935597   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:11.935604   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:12 GMT
	I0707 16:09:11.935679   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:11.936034   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:11.936043   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:11.936051   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:11.936058   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:11.937618   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:11.937628   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:11.937636   32269 round_trippers.go:580]     Audit-Id: c5142700-cf4c-4959-84ee-3e4645a9a60c
	I0707 16:09:11.937644   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:11.937652   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:11.937659   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:11.937664   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:11.937669   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:12 GMT
	I0707 16:09:11.937868   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:12.431654   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:12.431681   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:12.431696   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:12.431747   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:12.434746   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:12.434765   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:12.434774   32269 round_trippers.go:580]     Audit-Id: ae4a2629-49db-4a84-8fae-27809d5a52fb
	I0707 16:09:12.434781   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:12.434787   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:12.434794   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:12.434801   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:12.434807   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:12 GMT
	I0707 16:09:12.434910   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:12.435267   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:12.435276   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:12.435284   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:12.435291   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:12.437061   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:12.437069   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:12.437078   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:12.437086   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:12 GMT
	I0707 16:09:12.437093   32269 round_trippers.go:580]     Audit-Id: 48abb8da-cc55-45a5-9448-cc63020ff9a0
	I0707 16:09:12.437100   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:12.437109   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:12.437118   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:12.437210   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:12.932501   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:12.932526   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:12.932539   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:12.932549   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:12.935507   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:12.935521   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:12.935529   32269 round_trippers.go:580]     Audit-Id: a0dfdc7d-f691-4e05-8897-98f652c1e583
	I0707 16:09:12.935536   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:12.935542   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:12.935549   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:12.935556   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:12.935564   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:13 GMT
	I0707 16:09:12.935648   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:12.936000   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:12.936009   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:12.936017   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:12.936028   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:12.937567   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:12.937576   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:12.937581   32269 round_trippers.go:580]     Audit-Id: 48ec8c46-fcc2-46a1-a997-5013dc13270d
	I0707 16:09:12.937586   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:12.937592   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:12.937600   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:12.937607   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:12.937613   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:13 GMT
	I0707 16:09:12.937701   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:12.937871   32269 pod_ready.go:102] pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace has status "Ready":"False"
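[Editor's note] The cycle above (GET the coredns Pod, GET the multinode-136000 Node, log the Ready condition) repeats at a ~500ms interval, as the timestamps show, until the pod's Ready condition turns True. A minimal sketch of that readiness poll with client-go follows; waitPodReady, its arguments, and the exact interval are illustrative assumptions, not minikube's actual code behind the pod_ready.go:102 call site above.

	// Sketch of the polling pattern visible in this log, assuming client-go.
	package poll

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady re-fetches the pod every 500ms (matching the timestamps
	// above) until its Ready condition is True or the timeout expires.
	// Newer client-go would use wait.PollUntilContextTimeout instead.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					// Produces lines like the pod_ready.go:102 entries above.
					fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, cond.Status)
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}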
	I0707 16:09:13.432501   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:13.432525   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:13.432537   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:13.432548   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:13.435570   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:13.435579   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:13.435585   32269 round_trippers.go:580]     Audit-Id: 911352b7-2cd8-4110-8bc7-135549acd44a
	I0707 16:09:13.435591   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:13.435596   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:13.435602   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:13.435607   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:13.435612   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:13 GMT
	I0707 16:09:13.435724   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:13.436002   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:13.436009   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:13.436015   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:13.436020   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:13.437629   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:13.437638   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:13.437643   32269 round_trippers.go:580]     Audit-Id: 35fa1c30-ffb4-4e3d-99e6-8b3f4e6b8cfd
	I0707 16:09:13.437648   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:13.437654   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:13.437659   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:13.437679   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:13.437687   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:13 GMT
	I0707 16:09:13.437882   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:13.931603   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:13.931618   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:13.931625   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:13.931631   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:13.934547   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:13.934558   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:13.934563   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:13.934569   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:13.934574   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:14 GMT
	I0707 16:09:13.934579   32269 round_trippers.go:580]     Audit-Id: 8c3938fd-b5ae-43a7-a2f8-53654815dfb8
	I0707 16:09:13.934584   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:13.934589   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:13.936405   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:13.936687   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:13.936694   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:13.936700   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:13.936706   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:13.938712   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:13.938721   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:13.938728   32269 round_trippers.go:580]     Audit-Id: c29744ca-348e-4f19-8c6d-a3ac237cf76f
	I0707 16:09:13.938733   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:13.938739   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:13.938744   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:13.938749   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:13.938756   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:14 GMT
	I0707 16:09:13.938954   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:14.432651   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:14.432673   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:14.432685   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:14.432696   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:14.436182   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:14.436199   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:14.436207   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:14.436214   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:14.436220   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:14 GMT
	I0707 16:09:14.436229   32269 round_trippers.go:580]     Audit-Id: 96bb1766-d5e8-4489-9dd0-d38d7b956d85
	I0707 16:09:14.436235   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:14.436242   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:14.436604   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:14.436962   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:14.436971   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:14.436980   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:14.436992   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:14.438689   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:14.438698   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:14.438704   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:14.438710   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:14.438716   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:14 GMT
	I0707 16:09:14.438721   32269 round_trippers.go:580]     Audit-Id: 1b8feaa0-612a-47e6-8289-51c0903debf1
	I0707 16:09:14.438726   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:14.438731   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:14.438822   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:14.931642   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:14.931673   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:14.931733   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:14.931747   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:14.934772   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:14.934788   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:14.934796   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:15 GMT
	I0707 16:09:14.934802   32269 round_trippers.go:580]     Audit-Id: a6238cf3-29ab-4906-bcfb-9e1e12f00304
	I0707 16:09:14.934809   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:14.934816   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:14.934822   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:14.934830   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:14.934894   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:14.935245   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:14.935253   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:14.935261   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:14.935268   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:14.936732   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:14.936742   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:14.936748   32269 round_trippers.go:580]     Audit-Id: 564aa9cf-8fa0-4aef-a45c-79a27c332c56
	I0707 16:09:14.936756   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:14.936764   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:14.936770   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:14.936776   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:14.936781   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:15 GMT
	I0707 16:09:14.936857   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:15.431433   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:15.431450   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:15.431459   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:15.431466   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:15.433691   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:15.433704   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:15.433713   32269 round_trippers.go:580]     Audit-Id: 764f8f09-d899-48da-b5ac-a7796611b65d
	I0707 16:09:15.433723   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:15.433732   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:15.433737   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:15.433743   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:15.433748   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:15 GMT
	I0707 16:09:15.433825   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:15.434105   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:15.434112   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:15.434118   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:15.434123   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:15.435416   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:15.435423   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:15.435428   32269 round_trippers.go:580]     Audit-Id: 1ec0e114-9d51-47c2-aa2d-0b057bf236a7
	I0707 16:09:15.435432   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:15.435437   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:15.435441   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:15.435447   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:15.435453   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:15 GMT
	I0707 16:09:15.435548   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:15.435726   32269 pod_ready.go:102] pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace has status "Ready":"False"
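[Editor's note] Every request/response pair in this trace is emitted by client-go's debugging round tripper (the round_trippers.go call sites: :463 method and URL, :469/:473 request headers, :574 status and latency, :577/:580 response headers), with request.go:1188 printing the truncated body at high log verbosity. A hedged sketch of wiring that logger into a rest.Config is below; mapping each DebugLevel to those line numbers is an assumption based on client-go's transport package, not something this report states.

	// Sketch: attach client-go's debugging round tripper, which produces the
	// round_trippers.go lines seen in this log. The DebugLevel set chosen
	// here is illustrative.
	package poll

	import (
		"net/http"

		"k8s.io/client-go/rest"
		"k8s.io/client-go/transport"
	)

	func withDebugLogging(cfg *rest.Config) *rest.Config {
		cfg.WrapTransport = func(rt http.RoundTripper) http.RoundTripper {
			return transport.NewDebuggingRoundTripper(rt,
				transport.DebugJustURL,         // cf. round_trippers.go:463
				transport.DebugRequestHeaders,  // cf. round_trippers.go:469/:473
				transport.DebugResponseStatus,  // cf. round_trippers.go:574
				transport.DebugResponseHeaders, // cf. round_trippers.go:577/:580
			)
		}
		return cfg
	}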
	I0707 16:09:15.931589   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:15.931614   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:15.931662   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:15.931677   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:15.934599   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:15.934614   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:15.934624   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:15.934634   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:15.934646   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:15.934661   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:16 GMT
	I0707 16:09:15.934670   32269 round_trippers.go:580]     Audit-Id: 248ae3aa-ff53-4bd6-bc2b-dcba9f2f9df1
	I0707 16:09:15.934676   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:15.934745   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:15.935106   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:15.935115   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:15.935123   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:15.935130   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:15.936831   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:15.936841   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:15.936847   32269 round_trippers.go:580]     Audit-Id: 2ac834bd-f7b2-4dc9-8f62-463d7e5d3489
	I0707 16:09:15.936852   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:15.936860   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:15.936867   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:15.936871   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:15.936876   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:16 GMT
	I0707 16:09:15.936943   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:16.432998   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:16.433020   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:16.433032   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:16.433042   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:16.436191   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:16.436206   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:16.436214   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:16.436221   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:16.436227   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:16 GMT
	I0707 16:09:16.436234   32269 round_trippers.go:580]     Audit-Id: ecb7f4df-3d34-472e-99bd-f3e0dc86427f
	I0707 16:09:16.436241   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:16.436248   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:16.436316   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:16.436681   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:16.436690   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:16.436698   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:16.436705   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:16.438341   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:16.438350   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:16.438356   32269 round_trippers.go:580]     Audit-Id: 649de989-8499-49db-a07a-caaf90422dba
	I0707 16:09:16.438362   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:16.438367   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:16.438372   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:16.438378   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:16.438384   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:16 GMT
	I0707 16:09:16.438444   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:16.931921   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:16.931948   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:16.931962   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:16.931973   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:16.935171   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:16.935187   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:16.935195   32269 round_trippers.go:580]     Audit-Id: c2b8a445-8f16-45a6-ac73-459a720f8539
	I0707 16:09:16.935202   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:16.935209   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:16.935215   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:16.935223   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:16.935229   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:17 GMT
	I0707 16:09:16.935314   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:16.935671   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:16.935679   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:16.935687   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:16.935695   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:16.937057   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:16.937074   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:16.937083   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:16.937090   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:17 GMT
	I0707 16:09:16.937096   32269 round_trippers.go:580]     Audit-Id: c0154ed3-e161-4fc3-87d7-4f78b0586987
	I0707 16:09:16.937101   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:16.937106   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:16.937112   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:16.937245   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:17.432699   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:17.432718   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:17.432727   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:17.432735   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:17.434722   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:17.434733   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:17.434739   32269 round_trippers.go:580]     Audit-Id: 0915b4ce-6357-46e4-a52f-98bef632f8f5
	I0707 16:09:17.434745   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:17.434750   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:17.434755   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:17.434761   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:17.434765   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:17 GMT
	I0707 16:09:17.434810   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:17.435083   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:17.435089   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:17.435095   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:17.435101   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:17.436935   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:17.436945   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:17.436951   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:17.436956   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:17.436961   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:17 GMT
	I0707 16:09:17.436965   32269 round_trippers.go:580]     Audit-Id: f1de9b69-c0f9-4fd1-a58d-58791b866e84
	I0707 16:09:17.436971   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:17.436975   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:17.437026   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:17.437206   32269 pod_ready.go:102] pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace has status "Ready":"False"
	I0707 16:09:17.931424   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:17.931440   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:17.931447   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:17.931452   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:17.933230   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:17.933243   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:17.933250   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:17.933254   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:17.933259   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:18 GMT
	I0707 16:09:17.933264   32269 round_trippers.go:580]     Audit-Id: d4885c3b-0ad6-454f-bc77-620c39ffebf1
	I0707 16:09:17.933275   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:17.933280   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:17.933332   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:17.933605   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:17.933611   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:17.933617   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:17.933622   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:17.934975   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:17.934984   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:17.934990   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:17.934995   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:17.935004   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:18 GMT
	I0707 16:09:17.935010   32269 round_trippers.go:580]     Audit-Id: 35d23c13-0cff-4da5-90ec-b90566a02d1f
	I0707 16:09:17.935017   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:17.935024   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:17.935169   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:18.431536   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:18.431561   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:18.431574   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:18.431584   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:18.435141   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:18.435157   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:18.435168   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:18.435179   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:18 GMT
	I0707 16:09:18.435195   32269 round_trippers.go:580]     Audit-Id: 0d8f7c27-02e5-4e86-a2ae-e34db6a25ab0
	I0707 16:09:18.435208   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:18.435219   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:18.435228   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:18.435437   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:18.435732   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:18.435738   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:18.435744   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:18.435750   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:18.437192   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:18.437201   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:18.437207   32269 round_trippers.go:580]     Audit-Id: 32766576-5e01-4468-86fe-d463fd4040f8
	I0707 16:09:18.437212   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:18.437221   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:18.437228   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:18.437234   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:18.437239   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:18 GMT
	I0707 16:09:18.437374   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:18.932046   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:18.932072   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:18.932131   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:18.932145   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:18.935241   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:18.935257   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:18.935265   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:18.935272   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:18.935278   32269 round_trippers.go:580]     Audit-Id: a39fbe14-1675-4ffb-a81e-bfff111060ad
	I0707 16:09:18.935285   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:18.935292   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:18.935301   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:18.935465   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1093","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0707 16:09:18.935822   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:18.935831   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:18.935839   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:18.935846   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:18.937864   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:18.937873   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:18.937878   32269 round_trippers.go:580]     Audit-Id: 66e10f6e-51c7-42db-866f-bd7ffe368343
	I0707 16:09:18.937884   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:18.937896   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:18.937908   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:18.937915   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:18.937924   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:18.938179   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:19.433520   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-78qmb
	I0707 16:09:19.433549   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.433561   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.433571   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.436876   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:19.436892   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.436899   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.436906   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.436914   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.436934   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.436944   32269 round_trippers.go:580]     Audit-Id: 61836a81-b6fe-4aa6-8591-6c4841dbebd6
	I0707 16:09:19.436954   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.437210   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1214","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0707 16:09:19.437577   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:19.437586   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.437595   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.437603   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.439711   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:19.439720   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.439725   32269 round_trippers.go:580]     Audit-Id: 5592298c-8923-4186-af8f-c0a8cd2c6d4c
	I0707 16:09:19.439730   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.439736   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.439740   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.439746   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.439751   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.439869   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:19.440069   32269 pod_ready.go:92] pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace has status "Ready":"True"
	I0707 16:09:19.440090   32269 pod_ready.go:81] duration metric: took 10.512477733s waiting for pod "coredns-5d78c9869d-78qmb" in "kube-system" namespace to be "Ready" ...
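The exchange above is minikube's pod_ready helper: it GETs the pod, checks the Ready condition, and re-GETs the hosting node between attempts. The same polling pattern can be reproduced with client-go; the sketch below is illustrative only (waitPodReady is a hypothetical helper, and the interval and timeout values are not minikube's):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod until its Ready condition reports True,
    // mirroring the GET-pod loop in the log above.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(cs, "kube-system", "coredns-5d78c9869d-78qmb", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }

The same loop repeats below for etcd, kube-apiserver, kube-controller-manager, the kube-proxy daemonset pods, and kube-scheduler.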
	I0707 16:09:19.440110   32269 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.440136   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-136000
	I0707 16:09:19.440140   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.440146   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.440152   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.441771   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:19.441779   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.441784   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.441789   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.441794   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.441798   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.441803   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.441808   32269 round_trippers.go:580]     Audit-Id: 75e3b5ac-71ec-4e41-834d-6e293dba8b29
	I0707 16:09:19.441995   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-136000","namespace":"kube-system","uid":"636b837f-c544-4688-aa2b-2f602c1546c6","resourceVersion":"1178","creationTimestamp":"2023-07-07T23:02:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.55:2379","kubernetes.io/config.hash":"8669674c45323598ebbb888fff5e6cb4","kubernetes.io/config.mirror":"8669674c45323598ebbb888fff5e6cb4","kubernetes.io/config.seen":"2023-07-07T23:02:20.447968150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6072 chars]
	I0707 16:09:19.442194   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:19.442201   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.442206   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.442212   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.443537   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:19.443547   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.443556   32269 round_trippers.go:580]     Audit-Id: d39784bc-5c1b-4b72-85cd-1f3c8b625936
	I0707 16:09:19.443564   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.443570   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.443576   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.443584   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.443591   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.443712   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:19.443881   32269 pod_ready.go:92] pod "etcd-multinode-136000" in "kube-system" namespace has status "Ready":"True"
	I0707 16:09:19.443888   32269 pod_ready.go:81] duration metric: took 3.772999ms waiting for pod "etcd-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.443898   32269 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.443924   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-136000
	I0707 16:09:19.443928   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.443934   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.443941   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.446052   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:19.446075   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.446089   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.446100   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.446115   32269 round_trippers.go:580]     Audit-Id: 99d08dbf-6a1d-4f21-b23c-7ea354c9d6b0
	I0707 16:09:19.446123   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.446128   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.446133   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.446247   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-136000","namespace":"kube-system","uid":"e33f6220-5f99-43a2-adc8-49399f82e89c","resourceVersion":"1199","creationTimestamp":"2023-07-07T23:02:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.55:8443","kubernetes.io/config.hash":"10d234a603360886d3e49d7f2ebd7116","kubernetes.io/config.mirror":"10d234a603360886d3e49d7f2ebd7116","kubernetes.io/config.seen":"2023-07-07T23:02:20.447888975Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7609 chars]
	I0707 16:09:19.446578   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:19.446588   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.446595   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.446603   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.449003   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:19.449021   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.449030   32269 round_trippers.go:580]     Audit-Id: f4be0bd7-5a3c-4d27-a16b-7c6638768a5b
	I0707 16:09:19.449039   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.449046   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.449069   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.449081   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.449090   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.449474   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:19.449724   32269 pod_ready.go:92] pod "kube-apiserver-multinode-136000" in "kube-system" namespace has status "Ready":"True"
	I0707 16:09:19.449759   32269 pod_ready.go:81] duration metric: took 5.830633ms waiting for pod "kube-apiserver-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.449772   32269 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.449815   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-136000
	I0707 16:09:19.449824   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.449833   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.449840   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.451637   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:19.451653   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.451662   32269 round_trippers.go:580]     Audit-Id: 45d78963-f0d4-4f78-b593-c5cc0bb56701
	I0707 16:09:19.451671   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.451679   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.451688   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.451697   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.451705   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.451840   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-136000","namespace":"kube-system","uid":"a4c59edf-0147-4ae9-a3d0-b7559b3ab6c9","resourceVersion":"1184","creationTimestamp":"2023-07-07T23:02:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8b16ffd443c4ff5953586fb0655a6320","kubernetes.io/config.mirror":"8b16ffd443c4ff5953586fb0655a6320","kubernetes.io/config.seen":"2023-07-07T23:02:28.360407979Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0707 16:09:19.452176   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:19.452186   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.452195   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.452204   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.453861   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:19.453876   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.453884   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.453892   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.453900   32269 round_trippers.go:580]     Audit-Id: 19e52873-4bf7-45bd-af76-b73a57357cd2
	I0707 16:09:19.453908   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.453916   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.453923   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.454010   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:19.454234   32269 pod_ready.go:92] pod "kube-controller-manager-multinode-136000" in "kube-system" namespace has status "Ready":"True"
	I0707 16:09:19.454249   32269 pod_ready.go:81] duration metric: took 4.465381ms waiting for pod "kube-controller-manager-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.454262   32269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5865g" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.454298   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5865g
	I0707 16:09:19.454304   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.454314   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.454323   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.456165   32269 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0707 16:09:19.456180   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.456195   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.456211   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.456226   32269 round_trippers.go:580]     Audit-Id: e5df0ccd-637e-4c21-9862-71ca42d71c70
	I0707 16:09:19.456240   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.456253   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.456264   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.456357   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5865g","generateName":"kube-proxy-","namespace":"kube-system","uid":"3b0f7832-d4d7-41e7-ab55-08284cf98427","resourceVersion":"1059","creationTimestamp":"2023-07-07T23:04:00Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"40ec798e-383e-4e94-b5d5-10fc13347c1a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"40ec798e-383e-4e94-b5d5-10fc13347c1a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0707 16:09:19.456666   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000-m03
	I0707 16:09:19.456674   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.456681   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.456688   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.458334   32269 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0707 16:09:19.458345   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.458356   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.458363   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.458371   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.458379   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.458391   32269 round_trippers.go:580]     Content-Length: 210
	I0707 16:09:19.458398   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.458407   32269 round_trippers.go:580]     Audit-Id: c6049222-1fa0-4adf-9a3e-1be692a21797
	I0707 16:09:19.458420   32269 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-136000-m03\" not found","reason":"NotFound","details":{"name":"multinode-136000-m03","kind":"nodes"},"code":404}
	I0707 16:09:19.458478   32269 pod_ready.go:97] node "multinode-136000-m03" hosting pod "kube-proxy-5865g" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-136000-m03": nodes "multinode-136000-m03" not found
	I0707 16:09:19.458487   32269 pod_ready.go:81] duration metric: took 4.218638ms waiting for pod "kube-proxy-5865g" in "kube-system" namespace to be "Ready" ...
	E0707 16:09:19.458493   32269 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-136000-m03" hosting pod "kube-proxy-5865g" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-136000-m03": nodes "multinode-136000-m03" not found
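Here the wait is skipped rather than failed: the node hosting kube-proxy-5865g (multinode-136000-m03) returned 404 above, and a pod cannot become Ready on a node that no longer exists. A sketch of that fallback, continuing the client-go example above (the skip decision is inferred from the log, not taken from minikube's source; apierrors is k8s.io/apimachinery/pkg/api/errors):

    // nodeGone reports whether the pod's hosting node has been deleted, in
    // which case the caller skips the Ready wait instead of failing it.
    func nodeGone(cs *kubernetes.Clientset, nodeName string) (bool, error) {
        _, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return true, nil // 404, as for multinode-136000-m03 above
        }
        return false, err
    }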
	I0707 16:09:19.458500   32269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dvrg9" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:19.633890   32269 request.go:628] Waited for 175.258813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvrg9
	I0707 16:09:19.633950   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvrg9
	I0707 16:09:19.633961   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.633974   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.633985   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.636841   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:19.636878   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.636888   32269 round_trippers.go:580]     Audit-Id: f3c0ea1d-a148-4b4c-9a4f-d6e5058c361f
	I0707 16:09:19.636898   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.636906   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.636918   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.636926   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.636933   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.637088   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dvrg9","generateName":"kube-proxy-","namespace":"kube-system","uid":"f7473507-c702-444e-b727-71c8a8cc4c08","resourceVersion":"936","creationTimestamp":"2023-07-07T23:03:17Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"40ec798e-383e-4e94-b5d5-10fc13347c1a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:03:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"40ec798e-383e-4e94-b5d5-10fc13347c1a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I0707 16:09:19.835005   32269 request.go:628] Waited for 197.576904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000-m02
	I0707 16:09:19.835132   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000-m02
	I0707 16:09:19.835144   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:19.835157   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:19.835168   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:19.838052   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:19.838068   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:19.838076   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:19.838083   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:19.838089   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:19.838096   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:19.838103   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:19 GMT
	I0707 16:09:19.838110   32269 round_trippers.go:580]     Audit-Id: 23b06935-e725-4b96-81e6-424fc0c4c00b
	I0707 16:09:19.838201   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000-m02","uid":"e53ac27c-579d-4edc-87f1-2f80a931d265","resourceVersion":"955","creationTimestamp":"2023-07-07T23:06:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:06:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3252 chars]
	I0707 16:09:19.838409   32269 pod_ready.go:92] pod "kube-proxy-dvrg9" in "kube-system" namespace has status "Ready":"True"
	I0707 16:09:19.838418   32269 pod_ready.go:81] duration metric: took 379.898672ms waiting for pod "kube-proxy-dvrg9" in "kube-system" namespace to be "Ready" ...
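The "Waited for ... due to client-side throttling" lines above come from client-go's own token-bucket rate limiter (default QPS 5, burst 10), not from server-side API priority and fairness; the message says so explicitly. The limits can be raised on the rest.Config before the clientset is built; the values below are illustrative:

    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cfg.QPS = 50    // default 5; the source of the ~200ms waits above
    cfg.Burst = 100 // default 10
    cs := kubernetes.NewForConfigOrDie(cfg)
    _ = cs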
	I0707 16:09:19.838428   32269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wd4p8" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:20.034063   32269 request.go:628] Waited for 195.585239ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wd4p8
	I0707 16:09:20.034140   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wd4p8
	I0707 16:09:20.034151   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:20.034163   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:20.034177   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:20.036997   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:20.037013   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:20.037023   32269 round_trippers.go:580]     Audit-Id: 9aa64d3f-45a5-4cdb-9cb6-a52526f27641
	I0707 16:09:20.037035   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:20.037044   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:20.037054   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:20.037062   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:20.037069   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:20 GMT
	I0707 16:09:20.037259   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wd4p8","generateName":"kube-proxy-","namespace":"kube-system","uid":"4979ea40-a983-4f80-b7ac-f6e05cd5f6b4","resourceVersion":"1101","creationTimestamp":"2023-07-07T23:02:40Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"40ec798e-383e-4e94-b5d5-10fc13347c1a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"40ec798e-383e-4e94-b5d5-10fc13347c1a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0707 16:09:20.233825   32269 request.go:628] Waited for 196.195161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:20.233875   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:20.233884   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:20.233933   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:20.233948   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:20.237003   32269 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0707 16:09:20.237026   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:20.237036   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:20.237047   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:20.237076   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:20.237088   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:20.237098   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:20 GMT
	I0707 16:09:20.237105   32269 round_trippers.go:580]     Audit-Id: ef0a30c5-bfcf-4638-8f73-b602910b21c4
	I0707 16:09:20.237256   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:20.237505   32269 pod_ready.go:92] pod "kube-proxy-wd4p8" in "kube-system" namespace has status "Ready":"True"
	I0707 16:09:20.237516   32269 pod_ready.go:81] duration metric: took 399.073383ms waiting for pod "kube-proxy-wd4p8" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:20.237528   32269 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:20.433720   32269 request.go:628] Waited for 196.064742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-136000
	I0707 16:09:20.433770   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-136000
	I0707 16:09:20.433779   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:20.433792   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:20.433805   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:20.436742   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:20.436766   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:20.436776   32269 round_trippers.go:580]     Audit-Id: a3bf8398-c656-4a6b-b611-4440b55f37c0
	I0707 16:09:20.436786   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:20.436795   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:20.436802   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:20.436809   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:20.436818   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:20 GMT
	I0707 16:09:20.436977   32269 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-136000","namespace":"kube-system","uid":"90cc3143-cca1-4ac0-9c0a-0bfce8a8d99e","resourceVersion":"1197","creationTimestamp":"2023-07-07T23:02:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"81a87492d868eacbd03c1d020dad533c","kubernetes.io/config.mirror":"81a87492d868eacbd03c1d020dad533c","kubernetes.io/config.seen":"2023-07-07T23:02:28.360408566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0707 16:09:20.635356   32269 request.go:628] Waited for 198.075675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:20.635433   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes/multinode-136000
	I0707 16:09:20.635443   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:20.635457   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:20.635471   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:20.638427   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:20.638448   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:20.638460   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:20.638468   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:20 GMT
	I0707 16:09:20.638475   32269 round_trippers.go:580]     Audit-Id: 6443f5a9-c0c8-4258-a0d2-2fa51f1d4bfe
	I0707 16:09:20.638481   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:20.638489   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:20.638495   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:20.638688   32269 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-07T23:02:25Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0707 16:09:20.638944   32269 pod_ready.go:92] pod "kube-scheduler-multinode-136000" in "kube-system" namespace has status "Ready":"True"
	I0707 16:09:20.638955   32269 pod_ready.go:81] duration metric: took 401.410417ms waiting for pod "kube-scheduler-multinode-136000" in "kube-system" namespace to be "Ready" ...
	I0707 16:09:20.638965   32269 pod_ready.go:38] duration metric: took 11.717381449s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0707 16:09:20.638979   32269 api_server.go:52] waiting for apiserver process to appear ...
	I0707 16:09:20.639063   32269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0707 16:09:20.647618   32269 command_runner.go:130] > 1700
	I0707 16:09:20.647726   32269 api_server.go:72] duration metric: took 15.354762993s to wait for apiserver process to appear ...
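The "apiserver process" check runs pgrep inside the VM over SSH; "1700" above is the matched PID. A local equivalent with os/exec (the SSH transport and sudo are omitted to keep the sketch self-contained):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // -f matches the full command line, -x requires an exact match of
        // the pattern, -n keeps only the newest matching process.
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("apiserver process not found:", err)
            return
        }
        fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
    }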
	I0707 16:09:20.647734   32269 api_server.go:88] waiting for apiserver healthz status ...
	I0707 16:09:20.647743   32269 api_server.go:253] Checking apiserver healthz at https://192.168.64.55:8443/healthz ...
	I0707 16:09:20.651169   32269 api_server.go:279] https://192.168.64.55:8443/healthz returned 200:
	ok
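The healthz probe is a plain HTTPS GET whose body is expected to be the literal "ok", as logged above. A minimal net/http sketch; it skips certificate verification only to stay self-contained (the real check trusts the cluster CA, and on a locked-down cluster the endpoint may also require credentials):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.64.55:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }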
	I0707 16:09:20.651197   32269 round_trippers.go:463] GET https://192.168.64.55:8443/version
	I0707 16:09:20.651201   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:20.651208   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:20.651214   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:20.651929   32269 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0707 16:09:20.651940   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:20.651946   32269 round_trippers.go:580]     Content-Length: 263
	I0707 16:09:20.651951   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:20 GMT
	I0707 16:09:20.651957   32269 round_trippers.go:580]     Audit-Id: 4de76027-d734-4997-963b-e1d382aa8cdc
	I0707 16:09:20.651961   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:20.651966   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:20.651972   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:20.651976   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:20.651985   32269 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0707 16:09:20.652006   32269 api_server.go:141] control plane version: v1.27.3
	I0707 16:09:20.652013   32269 api_server.go:131] duration metric: took 4.275057ms to wait for apiserver health ...
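The /version exchange maps onto client-go's discovery client, which returns the version.Info shown in the body above. Reusing the clientset cs from the first sketch:

    info, err := cs.Discovery().ServerVersion()
    if err != nil {
        panic(err)
    }
    fmt.Println(info.GitVersion) // v1.27.3 for this cluster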
	I0707 16:09:20.652017   32269 system_pods.go:43] waiting for kube-system pods to appear ...
	I0707 16:09:20.835622   32269 request.go:628] Waited for 183.544367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods
	I0707 16:09:20.835721   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods
	I0707 16:09:20.835757   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:20.835770   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:20.835782   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:20.844240   32269 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0707 16:09:20.844254   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:20.844260   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:20.844294   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:20 GMT
	I0707 16:09:20.844300   32269 round_trippers.go:580]     Audit-Id: 5b219ec6-46a4-48ec-9b7f-caf71bccf436
	I0707 16:09:20.844305   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:20.844309   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:20.844315   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:20.845669   32269 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1218"},"items":[{"metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1214","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83346 chars]
	I0707 16:09:20.847522   32269 system_pods.go:59] 12 kube-system pods found
	I0707 16:09:20.847533   32269 system_pods.go:61] "coredns-5d78c9869d-78qmb" [d9671f13-fa08-4161-b216-53f645b9a1c1] Running
	I0707 16:09:20.847537   32269 system_pods.go:61] "etcd-multinode-136000" [636b837f-c544-4688-aa2b-2f602c1546c6] Running
	I0707 16:09:20.847540   32269 system_pods.go:61] "kindnet-gj2vg" [596c8647-685e-449c-86c0-9aeb7dddb2f5] Running
	I0707 16:09:20.847544   32269 system_pods.go:61] "kindnet-h8rpq" [30c883b3-9941-48da-a543-d1649a5418f9] Running
	I0707 16:09:20.847556   32269 system_pods.go:61] "kindnet-zpx7k" [179bc03c-a64f-48bc-9bb9-52e5c91e5037] Running
	I0707 16:09:20.847562   32269 system_pods.go:61] "kube-apiserver-multinode-136000" [e33f6220-5f99-43a2-adc8-49399f82e89c] Running
	I0707 16:09:20.847566   32269 system_pods.go:61] "kube-controller-manager-multinode-136000" [a4c59edf-0147-4ae9-a3d0-b7559b3ab6c9] Running
	I0707 16:09:20.847570   32269 system_pods.go:61] "kube-proxy-5865g" [3b0f7832-d4d7-41e7-ab55-08284cf98427] Running
	I0707 16:09:20.847574   32269 system_pods.go:61] "kube-proxy-dvrg9" [f7473507-c702-444e-b727-71c8a8cc4c08] Running
	I0707 16:09:20.847577   32269 system_pods.go:61] "kube-proxy-wd4p8" [4979ea40-a983-4f80-b7ac-f6e05cd5f6b4] Running
	I0707 16:09:20.847581   32269 system_pods.go:61] "kube-scheduler-multinode-136000" [90cc3143-cca1-4ac0-9c0a-0bfce8a8d99e] Running
	I0707 16:09:20.847584   32269 system_pods.go:61] "storage-provisioner" [e617383f-c16f-44a7-a1a4-a2813ecc84f2] Running
	I0707 16:09:20.847589   32269 system_pods.go:74] duration metric: took 195.563798ms to wait for pod list to return data ...
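The "12 kube-system pods found" summary comes from a single pod list. A sketch reusing cs and the imports from the first example:

    pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    for _, p := range pods.Items {
        fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    }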
	I0707 16:09:20.847594   32269 default_sa.go:34] waiting for default service account to be created ...
	I0707 16:09:21.034242   32269 request.go:628] Waited for 186.58916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/default/serviceaccounts
	I0707 16:09:21.034366   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/default/serviceaccounts
	I0707 16:09:21.034379   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:21.034393   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:21.034404   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:21.037027   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:21.037043   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:21.037052   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:21.037059   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:21.037067   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:21.037074   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:21.037081   32269 round_trippers.go:580]     Content-Length: 262
	I0707 16:09:21.037093   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:21 GMT
	I0707 16:09:21.037101   32269 round_trippers.go:580]     Audit-Id: b3c0b5b1-8a28-4670-bd11-f77f16c1caf4
	I0707 16:09:21.037115   32269 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1219"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5aa1743c-9d67-48b0-a877-1b6e8e0c8ed0","resourceVersion":"299","creationTimestamp":"2023-07-07T23:02:40Z"}}]}
	I0707 16:09:21.037254   32269 default_sa.go:45] found service account: "default"
	I0707 16:09:21.037265   32269 default_sa.go:55] duration metric: took 189.661575ms for default service account to be created ...
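Waiting for the default service account is again a poll, this time over the ServiceAccounts list in the default namespace. A sketch reusing cs (the interval, timeout, and the choice to treat list errors as retryable are assumptions, not minikube's exact logic):

    err := wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
        sas, err := cs.CoreV1().ServiceAccounts("default").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return false, nil // transient error: keep polling
        }
        for _, sa := range sas.Items {
            if sa.Name == "default" {
                return true, nil
            }
        }
        return false, nil
    })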
	I0707 16:09:21.037272   32269 system_pods.go:116] waiting for k8s-apps to be running ...
	I0707 16:09:21.234447   32269 request.go:628] Waited for 197.091758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods
	I0707 16:09:21.234545   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/namespaces/kube-system/pods
	I0707 16:09:21.234556   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:21.234567   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:21.234578   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:21.238683   32269 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0707 16:09:21.238693   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:21.238715   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:21.238732   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:21.238746   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:21.238756   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:21 GMT
	I0707 16:09:21.238762   32269 round_trippers.go:580]     Audit-Id: 2be8b427-461e-4901-a591-9b649e1aa7ab
	I0707 16:09:21.238771   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:21.239733   32269 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1219"},"items":[{"metadata":{"name":"coredns-5d78c9869d-78qmb","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d9671f13-fa08-4161-b216-53f645b9a1c1","resourceVersion":"1214","creationTimestamp":"2023-07-07T23:02:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"836020f5-ea90-4d2a-8bb8-68513602c2cc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-07T23:02:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"836020f5-ea90-4d2a-8bb8-68513602c2cc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83346 chars]
	I0707 16:09:21.242370   32269 system_pods.go:86] 12 kube-system pods found
	I0707 16:09:21.242387   32269 system_pods.go:89] "coredns-5d78c9869d-78qmb" [d9671f13-fa08-4161-b216-53f645b9a1c1] Running
	I0707 16:09:21.242392   32269 system_pods.go:89] "etcd-multinode-136000" [636b837f-c544-4688-aa2b-2f602c1546c6] Running
	I0707 16:09:21.242396   32269 system_pods.go:89] "kindnet-gj2vg" [596c8647-685e-449c-86c0-9aeb7dddb2f5] Running
	I0707 16:09:21.242400   32269 system_pods.go:89] "kindnet-h8rpq" [30c883b3-9941-48da-a543-d1649a5418f9] Running
	I0707 16:09:21.242404   32269 system_pods.go:89] "kindnet-zpx7k" [179bc03c-a64f-48bc-9bb9-52e5c91e5037] Running
	I0707 16:09:21.242408   32269 system_pods.go:89] "kube-apiserver-multinode-136000" [e33f6220-5f99-43a2-adc8-49399f82e89c] Running
	I0707 16:09:21.242412   32269 system_pods.go:89] "kube-controller-manager-multinode-136000" [a4c59edf-0147-4ae9-a3d0-b7559b3ab6c9] Running
	I0707 16:09:21.242416   32269 system_pods.go:89] "kube-proxy-5865g" [3b0f7832-d4d7-41e7-ab55-08284cf98427] Running
	I0707 16:09:21.242420   32269 system_pods.go:89] "kube-proxy-dvrg9" [f7473507-c702-444e-b727-71c8a8cc4c08] Running
	I0707 16:09:21.242424   32269 system_pods.go:89] "kube-proxy-wd4p8" [4979ea40-a983-4f80-b7ac-f6e05cd5f6b4] Running
	I0707 16:09:21.242427   32269 system_pods.go:89] "kube-scheduler-multinode-136000" [90cc3143-cca1-4ac0-9c0a-0bfce8a8d99e] Running
	I0707 16:09:21.242434   32269 system_pods.go:89] "storage-provisioner" [e617383f-c16f-44a7-a1a4-a2813ecc84f2] Running
	I0707 16:09:21.242438   32269 system_pods.go:126] duration metric: took 205.156757ms to wait for k8s-apps to be running ...
	I0707 16:09:21.242443   32269 system_svc.go:44] waiting for kubelet service to be running ....
	I0707 16:09:21.242494   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0707 16:09:21.251138   32269 system_svc.go:56] duration metric: took 8.690199ms WaitForService to wait for kubelet.
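The kubelet check shells out to systemd: "systemctl is-active --quiet" exits 0 only while the unit is active, so no output parsing is needed. A local sketch with os/exec (the real flow runs it with sudo over SSH; the extra "service" token seen in the logged command is dropped here):

    // Exit status 0 means the unit is active.
    err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    fmt.Println("kubelet active:", err == nil)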
	I0707 16:09:21.251150   32269 kubeadm.go:581] duration metric: took 15.958174317s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0707 16:09:21.251176   32269 node_conditions.go:102] verifying NodePressure condition ...
	I0707 16:09:21.433681   32269 request.go:628] Waited for 182.449596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.55:8443/api/v1/nodes
	I0707 16:09:21.433763   32269 round_trippers.go:463] GET https://192.168.64.55:8443/api/v1/nodes
	I0707 16:09:21.433774   32269 round_trippers.go:469] Request Headers:
	I0707 16:09:21.433786   32269 round_trippers.go:473]     Accept: application/json, */*
	I0707 16:09:21.433800   32269 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0707 16:09:21.436791   32269 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0707 16:09:21.436807   32269 round_trippers.go:577] Response Headers:
	I0707 16:09:21.436822   32269 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6f7bf8b-796b-46b3-afaa-35620b615199
	I0707 16:09:21.436831   32269 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8805c3f-a6ab-4661-9a18-efc8c282cc14
	I0707 16:09:21.436838   32269 round_trippers.go:580]     Date: Fri, 07 Jul 2023 23:09:21 GMT
	I0707 16:09:21.436844   32269 round_trippers.go:580]     Audit-Id: bf79a6fc-dad2-4cfe-baa5-67e5bcb57fbc
	I0707 16:09:21.436852   32269 round_trippers.go:580]     Cache-Control: no-cache, private
	I0707 16:09:21.436859   32269 round_trippers.go:580]     Content-Type: application/json
	I0707 16:09:21.437146   32269 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1220"},"items":[{"metadata":{"name":"multinode-136000","uid":"1919e0f5-12d4-4d1c-ab84-e8b4d7389a46","resourceVersion":"1177","creationTimestamp":"2023-07-07T23:02:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-136000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794","minikube.k8s.io/name":"multinode-136000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_07T16_02_29_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 9472 chars]
	I0707 16:09:21.437531   32269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0707 16:09:21.437544   32269 node_conditions.go:123] node cpu capacity is 2
	I0707 16:09:21.437552   32269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0707 16:09:21.437557   32269 node_conditions.go:123] node cpu capacity is 2
	I0707 16:09:21.437562   32269 node_conditions.go:105] duration metric: took 186.376949ms to run NodePressure ...
	I0707 16:09:21.437571   32269 start.go:228] waiting for startup goroutines ...
	I0707 16:09:21.437579   32269 start.go:233] waiting for cluster config update ...
	I0707 16:09:21.437586   32269 start.go:242] writing updated cluster config ...
	I0707 16:09:21.438319   32269 config.go:182] Loaded profile config "multinode-136000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0707 16:09:21.438414   32269 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/config.json ...
	I0707 16:09:21.482320   32269 out.go:177] * Starting worker node multinode-136000-m02 in cluster multinode-136000
	I0707 16:09:21.503950   32269 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0707 16:09:21.503973   32269 cache.go:57] Caching tarball of preloaded images
	I0707 16:09:21.504122   32269 preload.go:174] Found /Users/jenkins/minikube-integration/16845-29196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0707 16:09:21.504131   32269 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0707 16:09:21.504216   32269 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/config.json ...
	I0707 16:09:21.504736   32269 start.go:365] acquiring machines lock for multinode-136000-m02: {Name:mk81f6152b3f423bf222fad0025fe3c8ddb3ea12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0707 16:09:21.504789   32269 start.go:369] acquired machines lock for "multinode-136000-m02" in 39.658µs
	I0707 16:09:21.504811   32269 start.go:96] Skipping create...Using existing machine configuration
	I0707 16:09:21.504815   32269 fix.go:54] fixHost starting: m02
	I0707 16:09:21.505123   32269 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:09:21.505136   32269 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 16:09:21.512234   32269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49243
	I0707 16:09:21.512566   32269 main.go:141] libmachine: () Calling .GetVersion
	I0707 16:09:21.512972   32269 main.go:141] libmachine: Using API Version  1
	I0707 16:09:21.512995   32269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 16:09:21.513198   32269 main.go:141] libmachine: () Calling .GetMachineName
	I0707 16:09:21.513330   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	I0707 16:09:21.513429   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetState
	I0707 16:09:21.513504   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:09:21.513576   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | hyperkit pid from json: 32151
	I0707 16:09:21.514541   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | hyperkit pid 32151 missing from process table
	I0707 16:09:21.514568   32269 fix.go:102] recreateIfNeeded on multinode-136000-m02: state=Stopped err=<nil>
	I0707 16:09:21.514581   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	W0707 16:09:21.514666   32269 fix.go:128] unexpected machine state, will restart: <nil>
	I0707 16:09:21.537173   32269 out.go:177] * Restarting existing hyperkit VM for "multinode-136000-m02" ...
	I0707 16:09:21.579228   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .Start
	I0707 16:09:21.579480   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:09:21.579562   32269 main.go:141] libmachine: (multinode-136000-m02) minikube might have been shut down uncleanly; the hyperkit pid file still exists: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/hyperkit.pid
	I0707 16:09:21.581299   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | hyperkit pid 32151 missing from process table
	I0707 16:09:21.581312   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | pid 32151 is in state "Stopped"
	I0707 16:09:21.581331   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/hyperkit.pid...
	I0707 16:09:21.581521   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | Using UUID 671813f0-1d1a-11ee-8196-149d997f80ea
	I0707 16:09:21.611201   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | Generated MAC b2:4b:8:0:c2:14
	I0707 16:09:21.611226   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-136000
	I0707 16:09:21.611360   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"671813f0-1d1a-11ee-8196-149d997f80ea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004e8930)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0707 16:09:21.611389   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"671813f0-1d1a-11ee-8196-149d997f80ea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004e8930)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0707 16:09:21.611509   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "671813f0-1d1a-11ee-8196-149d997f80ea", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/multinode-136000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/tty,log=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/bzimage,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-136000"}
	I0707 16:09:21.611562   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 671813f0-1d1a-11ee-8196-149d997f80ea -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/multinode-136000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/tty,log=/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/bzimage,/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-136000"
	I0707 16:09:21.611576   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0707 16:09:21.612846   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 DEBUG: hyperkit: Pid is 32313
	I0707 16:09:21.613206   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | Attempt 0
	I0707 16:09:21.613225   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:09:21.613270   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | hyperkit pid from json: 32313
	I0707 16:09:21.614973   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | Searching for b2:4b:8:0:c2:14 in /var/db/dhcpd_leases ...
	I0707 16:09:21.615055   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | Found 56 entries in /var/db/dhcpd_leases!
	I0707 16:09:21.615071   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.55 HWAddress:66:77:10:3:27:1c ID:1,66:77:10:3:27:1c Lease:0x64a9ec75}
	I0707 16:09:21.615078   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.57 HWAddress:e2:5d:8d:f1:83:3b ID:1,e2:5d:8d:f1:83:3b Lease:0x64a89ada}
	I0707 16:09:21.615090   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.56 HWAddress:b2:4b:8:0:c2:14 ID:1,b2:4b:8:0:c2:14 Lease:0x64a9ebeb}
	I0707 16:09:21.615101   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | Found match: b2:4b:8:0:c2:14
	I0707 16:09:21.615110   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | IP: 192.168.64.56
	I0707 16:09:21.615130   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetConfigRaw
	I0707 16:09:21.615654   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetIP
	I0707 16:09:21.615844   32269 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/multinode-136000/config.json ...
	I0707 16:09:21.616134   32269 machine.go:88] provisioning docker machine ...
	I0707 16:09:21.616144   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	I0707 16:09:21.616252   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetMachineName
	I0707 16:09:21.616335   32269 buildroot.go:166] provisioning hostname "multinode-136000-m02"
	I0707 16:09:21.616347   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetMachineName
	I0707 16:09:21.616427   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:21.616527   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:21.616608   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:21.616692   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:21.616801   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:21.616940   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:09:21.617269   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.56 22 <nil> <nil>}
	I0707 16:09:21.617281   32269 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-136000-m02 && echo "multinode-136000-m02" | sudo tee /etc/hostname
	I0707 16:09:21.619337   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0707 16:09:21.627200   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0707 16:09:21.628054   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0707 16:09:21.628070   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0707 16:09:21.628080   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0707 16:09:21.628093   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0707 16:09:21.993338   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0707 16:09:21.993353   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0707 16:09:22.097423   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0707 16:09:22.097444   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0707 16:09:22.097455   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0707 16:09:22.097465   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0707 16:09:22.098331   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0707 16:09:22.098340   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0707 16:09:26.921488   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:26 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0707 16:09:26.921567   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:26 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0707 16:09:26.921583   32269 main.go:141] libmachine: (multinode-136000-m02) DBG | 2023/07/07 16:09:26 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0707 16:09:56.714714   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-136000-m02
	
	I0707 16:09:56.714729   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:56.714866   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:56.714965   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:56.715045   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:56.715146   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:56.715297   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:09:56.715609   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.56 22 <nil> <nil>}
	I0707 16:09:56.715621   32269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-136000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-136000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-136000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0707 16:09:56.796953   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0707 16:09:56.796979   32269 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16845-29196/.minikube CaCertPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16845-29196/.minikube}
	I0707 16:09:56.796991   32269 buildroot.go:174] setting up certificates
	I0707 16:09:56.796999   32269 provision.go:83] configureAuth start
	I0707 16:09:56.797006   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetMachineName
	I0707 16:09:56.797147   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetIP
	I0707 16:09:56.797238   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:56.797325   32269 provision.go:138] copyHostCerts
	I0707 16:09:56.797370   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem
	I0707 16:09:56.797424   32269 exec_runner.go:144] found /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem, removing ...
	I0707 16:09:56.797429   32269 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem
	I0707 16:09:56.797544   32269 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16845-29196/.minikube/ca.pem (1082 bytes)
	I0707 16:09:56.797719   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem
	I0707 16:09:56.797761   32269 exec_runner.go:144] found /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem, removing ...
	I0707 16:09:56.797766   32269 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem
	I0707 16:09:56.797831   32269 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16845-29196/.minikube/cert.pem (1123 bytes)
	I0707 16:09:56.797963   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem
	I0707 16:09:56.798005   32269 exec_runner.go:144] found /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem, removing ...
	I0707 16:09:56.798010   32269 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem
	I0707 16:09:56.798080   32269 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16845-29196/.minikube/key.pem (1675 bytes)
	I0707 16:09:56.798210   32269 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca-key.pem org=jenkins.multinode-136000-m02 san=[192.168.64.56 192.168.64.56 localhost 127.0.0.1 minikube multinode-136000-m02]
	I0707 16:09:56.873950   32269 provision.go:172] copyRemoteCerts
	I0707 16:09:56.874008   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0707 16:09:56.874025   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:56.874169   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:56.874261   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:56.874358   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:56.874448   32269 sshutil.go:53] new ssh client: &{IP:192.168.64.56 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/id_rsa Username:docker}
	I0707 16:09:56.917647   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0707 16:09:56.917716   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0707 16:09:56.933654   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0707 16:09:56.933710   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0707 16:09:56.949646   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0707 16:09:56.949702   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0707 16:09:56.965625   32269 provision.go:86] duration metric: configureAuth took 168.616012ms
	I0707 16:09:56.965635   32269 buildroot.go:189] setting minikube options for container-runtime
	I0707 16:09:56.965811   32269 config.go:182] Loaded profile config "multinode-136000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0707 16:09:56.965826   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	I0707 16:09:56.965954   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:56.966037   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:56.966131   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:56.966217   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:56.966294   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:56.966416   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:09:56.966705   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.56 22 <nil> <nil>}
	I0707 16:09:56.966713   32269 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0707 16:09:57.042075   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0707 16:09:57.042092   32269 buildroot.go:70] root file system type: tmpfs
	I0707 16:09:57.042186   32269 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0707 16:09:57.042200   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:57.042343   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:57.042436   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.042512   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.042592   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:57.042728   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:09:57.043044   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.56 22 <nil> <nil>}
	I0707 16:09:57.043091   32269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.64.55"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0707 16:09:57.127435   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.64.55
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
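	The unit file echoed above relies on systemd's directive-reset behavior, which its own comments describe: an empty ExecStart= clears the command inherited from the base unit, so the single command that follows is the only one systemd sees. A minimal sketch of the same pattern as a stand-alone drop-in (the unit name, override path, and binary are placeholders, not taken from this run):
	
	# Hypothetical drop-in illustrating the ExecStart-reset pattern.
	sudo mkdir -p /etc/systemd/system/example.service.d
	sudo tee /etc/systemd/system/example.service.d/10-override.conf <<'EOF'
	[Service]
	# An empty ExecStart= discards the command inherited from the base unit;
	# without it, systemd rejects the unit for having two ExecStart= settings.
	ExecStart=
	ExecStart=/usr/local/bin/example-daemon
	EOF
	sudo systemctl daemon-reload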
	
	I0707 16:09:57.127452   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:57.127588   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:57.127700   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.127789   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.127877   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:57.128013   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:09:57.128319   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.56 22 <nil> <nil>}
	I0707 16:09:57.128333   32269 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0707 16:09:57.690253   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0707 16:09:57.690267   32269 machine.go:91] provisioned docker machine in 36.073333253s
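	The diff-based command above is an idempotent update: the generated unit is written to docker.service.new, compared against the installed file, and only when they differ (or, as here, the installed file does not exist yet) is the new file moved into place and docker reloaded, enabled, and restarted. A sketch of the same compare-then-swap pattern, with the same paths used illustratively:
	
	# Compare-then-swap update of a config file (paths are illustrative).
	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	if ! sudo diff -u "$cur" "$new"; then
	    # The files differ or $cur is missing: install the new unit and restart.
	    sudo mv "$new" "$cur"
	    sudo systemctl -f daemon-reload
	    sudo systemctl -f enable docker
	    sudo systemctl -f restart docker
	fi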
	I0707 16:09:57.690274   32269 start.go:300] post-start starting for "multinode-136000-m02" (driver="hyperkit")
	I0707 16:09:57.690281   32269 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0707 16:09:57.690314   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	I0707 16:09:57.690500   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0707 16:09:57.690520   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:57.690613   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:57.690697   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.690781   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:57.690857   32269 sshutil.go:53] new ssh client: &{IP:192.168.64.56 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/id_rsa Username:docker}
	I0707 16:09:57.734283   32269 ssh_runner.go:195] Run: cat /etc/os-release
	I0707 16:09:57.736846   32269 command_runner.go:130] > NAME=Buildroot
	I0707 16:09:57.736859   32269 command_runner.go:130] > VERSION=2021.02.12-1-g6f2898e-dirty
	I0707 16:09:57.736863   32269 command_runner.go:130] > ID=buildroot
	I0707 16:09:57.736868   32269 command_runner.go:130] > VERSION_ID=2021.02.12
	I0707 16:09:57.736885   32269 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0707 16:09:57.736979   32269 info.go:137] Remote host: Buildroot 2021.02.12
	I0707 16:09:57.736989   32269 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16845-29196/.minikube/addons for local assets ...
	I0707 16:09:57.737071   32269 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16845-29196/.minikube/files for local assets ...
	I0707 16:09:57.737245   32269 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem -> 296432.pem in /etc/ssl/certs
	I0707 16:09:57.737250   32269 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem -> /etc/ssl/certs/296432.pem
	I0707 16:09:57.737432   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0707 16:09:57.743102   32269 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/ssl/certs/296432.pem --> /etc/ssl/certs/296432.pem (1708 bytes)
	I0707 16:09:57.759197   32269 start.go:303] post-start completed in 68.913748ms
	I0707 16:09:57.759208   32269 fix.go:56] fixHost completed within 36.253597016s
	I0707 16:09:57.759222   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:57.759352   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:57.759474   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.759564   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.759651   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:57.759766   32269 main.go:141] libmachine: Using SSH client type: native
	I0707 16:09:57.760064   32269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 192.168.64.56 22 <nil> <nil>}
	I0707 16:09:57.760073   32269 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0707 16:09:57.834660   32269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688771397.920614680
	
	I0707 16:09:57.834672   32269 fix.go:206] guest clock: 1688771397.920614680
	I0707 16:09:57.834677   32269 fix.go:219] Guest: 2023-07-07 16:09:57.92061468 -0700 PDT Remote: 2023-07-07 16:09:57.759213 -0700 PDT m=+89.602198557 (delta=161.40168ms)
	I0707 16:09:57.834687   32269 fix.go:190] guest clock delta is within tolerance: 161.40168ms
	I0707 16:09:57.834691   32269 start.go:83] releasing machines lock for "multinode-136000-m02", held for 36.32909835s
	I0707 16:09:57.834715   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	I0707 16:09:57.834848   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetIP
	I0707 16:09:57.858213   32269 out.go:177] * Found network options:
	I0707 16:09:57.880446   32269 out.go:177]   - NO_PROXY=192.168.64.55
	W0707 16:09:57.902337   32269 proxy.go:119] fail to check proxy env: Error ip not in block
	I0707 16:09:57.902382   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	I0707 16:09:57.903199   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	I0707 16:09:57.903460   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	I0707 16:09:57.903625   32269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0707 16:09:57.903663   32269 proxy.go:119] fail to check proxy env: Error ip not in block
	I0707 16:09:57.903688   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:57.903817   32269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0707 16:09:57.903847   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:09:57.903920   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:57.904056   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:09:57.904124   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.904245   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:09:57.904261   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:57.904378   32269 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:09:57.904402   32269 sshutil.go:53] new ssh client: &{IP:192.168.64.56 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/id_rsa Username:docker}
	I0707 16:09:57.904500   32269 sshutil.go:53] new ssh client: &{IP:192.168.64.56 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/id_rsa Username:docker}
	I0707 16:09:57.945647   32269 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0707 16:09:57.945792   32269 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0707 16:09:57.945862   32269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0707 16:09:57.989041   32269 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0707 16:09:57.989094   32269 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0707 16:09:57.989120   32269 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0707 16:09:57.989133   32269 start.go:466] detecting cgroup driver to use...
	I0707 16:09:57.989247   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0707 16:09:58.002396   32269 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0707 16:09:58.002465   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0707 16:09:58.009560   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0707 16:09:58.016560   32269 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0707 16:09:58.016607   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0707 16:09:58.023558   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0707 16:09:58.030474   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0707 16:09:58.037312   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0707 16:09:58.044416   32269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0707 16:09:58.051665   32269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0707 16:09:58.058551   32269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0707 16:09:58.064706   32269 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0707 16:09:58.064873   32269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0707 16:09:58.071105   32269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0707 16:09:58.165592   32269 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0707 16:09:58.177684   32269 start.go:466] detecting cgroup driver to use...
	I0707 16:09:58.177751   32269 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0707 16:09:58.186600   32269 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0707 16:09:58.187212   32269 command_runner.go:130] > [Unit]
	I0707 16:09:58.187240   32269 command_runner.go:130] > Description=Docker Application Container Engine
	I0707 16:09:58.187245   32269 command_runner.go:130] > Documentation=https://docs.docker.com
	I0707 16:09:58.187252   32269 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0707 16:09:58.187259   32269 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0707 16:09:58.187267   32269 command_runner.go:130] > StartLimitBurst=3
	I0707 16:09:58.187271   32269 command_runner.go:130] > StartLimitIntervalSec=60
	I0707 16:09:58.187275   32269 command_runner.go:130] > [Service]
	I0707 16:09:58.187321   32269 command_runner.go:130] > Type=notify
	I0707 16:09:58.187326   32269 command_runner.go:130] > Restart=on-failure
	I0707 16:09:58.187330   32269 command_runner.go:130] > Environment=NO_PROXY=192.168.64.55
	I0707 16:09:58.187336   32269 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0707 16:09:58.187348   32269 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0707 16:09:58.187368   32269 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0707 16:09:58.187394   32269 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0707 16:09:58.187400   32269 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0707 16:09:58.187407   32269 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0707 16:09:58.187414   32269 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0707 16:09:58.187425   32269 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0707 16:09:58.187431   32269 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0707 16:09:58.187434   32269 command_runner.go:130] > ExecStart=
	I0707 16:09:58.187446   32269 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0707 16:09:58.187452   32269 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0707 16:09:58.187459   32269 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0707 16:09:58.187465   32269 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0707 16:09:58.187490   32269 command_runner.go:130] > LimitNOFILE=infinity
	I0707 16:09:58.187516   32269 command_runner.go:130] > LimitNPROC=infinity
	I0707 16:09:58.187521   32269 command_runner.go:130] > LimitCORE=infinity
	I0707 16:09:58.187529   32269 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0707 16:09:58.187536   32269 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0707 16:09:58.187541   32269 command_runner.go:130] > TasksMax=infinity
	I0707 16:09:58.187561   32269 command_runner.go:130] > TimeoutStartSec=0
	I0707 16:09:58.187589   32269 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0707 16:09:58.187595   32269 command_runner.go:130] > Delegate=yes
	I0707 16:09:58.187601   32269 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0707 16:09:58.187608   32269 command_runner.go:130] > KillMode=process
	I0707 16:09:58.187613   32269 command_runner.go:130] > [Install]
	I0707 16:09:58.187616   32269 command_runner.go:130] > WantedBy=multi-user.target
	I0707 16:09:58.187820   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0707 16:09:58.198397   32269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0707 16:09:58.229497   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0707 16:09:58.238537   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0707 16:09:58.247320   32269 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0707 16:09:58.268907   32269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0707 16:09:58.278123   32269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0707 16:09:58.290354   32269 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0707 16:09:58.290771   32269 ssh_runner.go:195] Run: which cri-dockerd
	I0707 16:09:58.292879   32269 command_runner.go:130] > /usr/bin/cri-dockerd
	I0707 16:09:58.293077   32269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0707 16:09:58.299024   32269 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0707 16:09:58.309756   32269 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0707 16:09:58.389655   32269 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0707 16:09:58.477748   32269 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0707 16:09:58.477764   32269 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0707 16:09:58.489100   32269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0707 16:09:58.577474   32269 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0707 16:10:59.622276   32269 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0707 16:10:59.622290   32269 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I0707 16:10:59.622326   32269 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.043496052s)
	I0707 16:10:59.644709   32269 out.go:177] 
	W0707 16:10:59.665571   32269 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W0707 16:10:59.665610   32269 out.go:239] * 
	W0707 16:10:59.666822   32269 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0707 16:10:59.710532   32269 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Fri 2023-07-07 23:08:36 UTC, ends at Fri 2023-07-07 23:11:00 UTC. --
	Jul 07 23:09:17 multinode-136000 dockerd[865]: time="2023-07-07T23:09:17.549800448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:09:17 multinode-136000 dockerd[865]: time="2023-07-07T23:09:17.549830142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 07 23:09:17 multinode-136000 dockerd[865]: time="2023-07-07T23:09:17.549846291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:09:17 multinode-136000 dockerd[865]: time="2023-07-07T23:09:17.869211409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 07 23:09:17 multinode-136000 dockerd[865]: time="2023-07-07T23:09:17.869258134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:09:17 multinode-136000 dockerd[865]: time="2023-07-07T23:09:17.869279841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 07 23:09:17 multinode-136000 dockerd[865]: time="2023-07-07T23:09:17.869289096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:09:17 multinode-136000 cri-dockerd[1094]: time="2023-07-07T23:09:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d226d3904c7c8cfea3911c32eccdd884fe447a9946f03437242bb6be0bcc3ca3/resolv.conf as [nameserver 192.168.64.1]"
	Jul 07 23:09:18 multinode-136000 dockerd[865]: time="2023-07-07T23:09:18.002831576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 07 23:09:18 multinode-136000 dockerd[865]: time="2023-07-07T23:09:18.002919225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:09:18 multinode-136000 dockerd[865]: time="2023-07-07T23:09:18.002939139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 07 23:09:18 multinode-136000 dockerd[865]: time="2023-07-07T23:09:18.002947851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:09:18 multinode-136000 cri-dockerd[1094]: time="2023-07-07T23:09:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06c8fde8a8f6aad4866434727d7da1fdf75e21e8527c1011bf014da051f57fba/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 07 23:09:18 multinode-136000 dockerd[865]: time="2023-07-07T23:09:18.390104439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 07 23:09:18 multinode-136000 dockerd[865]: time="2023-07-07T23:09:18.390270157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:09:18 multinode-136000 dockerd[865]: time="2023-07-07T23:09:18.390288468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 07 23:09:18 multinode-136000 dockerd[865]: time="2023-07-07T23:09:18.390299281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:09:32 multinode-136000 dockerd[859]: time="2023-07-07T23:09:32.602183533Z" level=info msg="ignoring event" container=bbaccdaa2e9395735f07378a8789b3f62c48921c4782df12afe45bcc1b79179c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 07 23:09:32 multinode-136000 dockerd[865]: time="2023-07-07T23:09:32.603418044Z" level=info msg="shim disconnected" id=bbaccdaa2e9395735f07378a8789b3f62c48921c4782df12afe45bcc1b79179c namespace=moby
	Jul 07 23:09:32 multinode-136000 dockerd[865]: time="2023-07-07T23:09:32.603707626Z" level=warning msg="cleaning up after shim disconnected" id=bbaccdaa2e9395735f07378a8789b3f62c48921c4782df12afe45bcc1b79179c namespace=moby
	Jul 07 23:09:32 multinode-136000 dockerd[865]: time="2023-07-07T23:09:32.603751155Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 07 23:09:45 multinode-136000 dockerd[865]: time="2023-07-07T23:09:45.612930635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 07 23:09:45 multinode-136000 dockerd[865]: time="2023-07-07T23:09:45.613002871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 07 23:09:45 multinode-136000 dockerd[865]: time="2023-07-07T23:09:45.613044834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 07 23:09:45 multinode-136000 dockerd[865]: time="2023-07-07T23:09:45.613059989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	192eb55487bfc       6e38f40d628db       About a minute ago   Running             storage-provisioner       4                   353fc65199678
	ba8df4ac5b491       8c811b4aec35f       About a minute ago   Running             busybox                   2                   06c8fde8a8f6a
	a52fdbd2e7fd4       ead0a4a53df89       About a minute ago   Running             coredns                   2                   d226d3904c7c8
	72cf41cf2f02d       b0b1fa0f58c6e       About a minute ago   Running             kindnet-cni               2                   fd2951ad9cc22
	974238f9eec7e       5780543258cf0       About a minute ago   Running             kube-proxy                2                   4d89ddc18deca
	bbaccdaa2e939       6e38f40d628db       About a minute ago   Exited              storage-provisioner       3                   353fc65199678
	c6fc0d8fe7332       41697ceeb70b3       2 minutes ago        Running             kube-scheduler            2                   9c986e3b6ac65
	cd21f109b5b1f       86b6af7dd652c       2 minutes ago        Running             etcd                      2                   0314b26b3f2a2
	49f6caa448e59       7cffc01dba0e1       2 minutes ago        Running             kube-controller-manager   2                   d9dd887ea6f92
	8cd753367a73d       08a0c939e61b7       2 minutes ago        Running             kube-apiserver            2                   3c2798d588e3e
	81465d000686c       8c811b4aec35f       4 minutes ago        Exited              busybox                   1                   c6aa21e5c3e56
	5446c9eb3ec81       ead0a4a53df89       4 minutes ago        Exited              coredns                   1                   3b27f9dc5b006
	b1b16ce0e1c2f       b0b1fa0f58c6e       5 minutes ago        Exited              kindnet-cni               1                   2f325ef45b4f2
	df2ce2928fd17       5780543258cf0       5 minutes ago        Exited              kube-proxy                1                   76e1078f77285
	de3cae1acc39f       41697ceeb70b3       5 minutes ago        Exited              kube-scheduler            1                   1cd6ba5096875
	b2c1151ec6631       86b6af7dd652c       5 minutes ago        Exited              etcd                      1                   ef7a96b917fd6
	50f3c898eb77e       08a0c939e61b7       5 minutes ago        Exited              kube-apiserver            1                   d462026e5304b
	317ce02a7796a       7cffc01dba0e1       5 minutes ago        Exited              kube-controller-manager   1                   9278b14b49d4c
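	
	A listing like the above can usually be reproduced directly against this run's VM (profile
	name taken from this report; standard docker CLI flags):
	
	  minikube ssh -p multinode-136000 -- sudo docker ps -a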
	
	* 
	* ==> coredns [5446c9eb3ec8] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 82b95b61957b89eeea31bdaf6987f010031330ef97d5f8469dbdaa80b119a5b0c9955b961009dd5b77ee3ada002b456836be781510516cbd9d015b1a704a24ea
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46210 - 49301 "HINFO IN 312116599509882877.480521291911041484. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.005178943s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [a52fdbd2e7fd] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 82b95b61957b89eeea31bdaf6987f010031330ef97d5f8469dbdaa80b119a5b0c9955b961009dd5b77ee3ada002b456836be781510516cbd9d015b1a704a24ea
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48140 - 49759 "HINFO IN 2319291222755232200.1500276195469922182. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004326524s
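	
	The random-name HINFO query above is CoreDNS's startup loop-detection probe; an NXDOMAIN
	answer is the expected, healthy result. A quick in-cluster resolution check, assuming the
	kubectl context minikube created for this profile and that the busybox Deployment is named
	"busybox" (inferred from the pod names in this report):
	
	  kubectl --context multinode-136000 exec deploy/busybox -- nslookup kubernetes.default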
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-136000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-136000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3dd06f17c6a1b64a4b1936ddf0915ac0c80e3794
	                    minikube.k8s.io/name=multinode-136000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_07T16_02_29_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 07 Jul 2023 23:02:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-136000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 07 Jul 2023 23:10:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 07 Jul 2023 23:09:08 +0000   Fri, 07 Jul 2023 23:02:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 07 Jul 2023 23:09:08 +0000   Fri, 07 Jul 2023 23:02:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 07 Jul 2023 23:09:08 +0000   Fri, 07 Jul 2023 23:02:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 07 Jul 2023 23:09:08 +0000   Fri, 07 Jul 2023 23:09:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.64.55
	  Hostname:    multinode-136000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	System Info:
	  Machine ID:                 688b96d09d10443ba6d8c99f6994bb09
	  System UUID:                442911ee-0000-0000-8196-149d997f80ea
	  Boot ID:                    4cba8606-7e2e-4867-bb65-1ee64a404f7c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-jbj7z                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 coredns-5d78c9869d-78qmb                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m20s
	  kube-system                 etcd-multinode-136000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m34s
	  kube-system                 kindnet-h8rpq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m21s
	  kube-system                 kube-apiserver-multinode-136000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-controller-manager-multinode-136000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-proxy-wd4p8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-scheduler-multinode-136000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m19s                  kube-proxy       
	  Normal  Starting                 118s                   kube-proxy       
	  Normal  Starting                 5m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m33s                  kubelet          Node multinode-136000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m33s                  kubelet          Node multinode-136000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m33s                  kubelet          Node multinode-136000 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m33s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m21s                  node-controller  Node multinode-136000 event: Registered Node multinode-136000 in Controller
	  Normal  NodeReady                8m12s                  kubelet          Node multinode-136000 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node multinode-136000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m18s (x8 over 5m18s)  kubelet          Node multinode-136000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     5m18s (x7 over 5m18s)  kubelet          Node multinode-136000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m1s                   node-controller  Node multinode-136000 event: Registered Node multinode-136000 in Controller
	  Normal  Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)    kubelet          Node multinode-136000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)    kubelet          Node multinode-136000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x7 over 2m6s)    kubelet          Node multinode-136000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           108s                   node-controller  Node multinode-136000 event: Registered Node multinode-136000 in Controller
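	
	The node description above can be regenerated at any point (context name assumed to match
	the profile name):
	
	  kubectl --context multinode-136000 describe node multinode-136000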
	
	
	Name:               multinode-136000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-136000-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 07 Jul 2023 23:06:54 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-136000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 07 Jul 2023 23:08:05 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 07 Jul 2023 23:07:00 +0000   Fri, 07 Jul 2023 23:09:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 07 Jul 2023 23:07:00 +0000   Fri, 07 Jul 2023 23:09:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 07 Jul 2023 23:07:00 +0000   Fri, 07 Jul 2023 23:09:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 07 Jul 2023 23:07:00 +0000   Fri, 07 Jul 2023 23:09:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.64.56
	  Hostname:    multinode-136000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	System Info:
	  Machine ID:                 92e2d4d4bf3445deaa3cbd52f9aa6c99
	  System UUID:                671811ee-0000-0000-8196-149d997f80ea
	  Boot ID:                    25561825-f702-4b57-b6d4-e60b4418dda6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-6mm4t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kindnet-gj2vg              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m44s
	  kube-system                 kube-proxy-dvrg9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m31s                  kube-proxy       
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  Starting                 7m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m44s (x2 over 7m44s)  kubelet          Node multinode-136000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m44s (x2 over 7m44s)  kubelet          Node multinode-136000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m44s (x2 over 7m44s)  kubelet          Node multinode-136000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m27s                  kubelet          Node multinode-136000-m02 status is now: NodeReady
	  Normal  Starting                 4m7s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m7s (x2 over 4m7s)    kubelet          Node multinode-136000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x2 over 4m7s)    kubelet          Node multinode-136000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x2 over 4m7s)    kubelet          Node multinode-136000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m1s                   kubelet          Node multinode-136000-m02 status is now: NodeReady
	  Normal  RegisteredNode           108s                   node-controller  Node multinode-136000-m02 event: Registered Node multinode-136000-m02 in Controller
	  Normal  NodeNotReady             68s                    node-controller  Node multinode-136000-m02 status is now: NodeNotReady
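	
	The Unknown conditions and unreachable taints above mean the m02 kubelet stopped posting
	status after the control-plane restart (lease RenewTime 23:08:05, NodeNotReady at 23:09:53).
	One way to confirm from the host, using standard kubectl and the context name assumed for
	this profile:
	
	  kubectl --context multinode-136000 get nodes -o wide
	  kubectl --context multinode-136000 describe node multinode-136000-m02 | grep -A2 Taints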
	
	* 
	* ==> dmesg <==
	* [  +0.028265] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +4.945053] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.008118] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.261091] systemd-fstab-generator[124]: Ignoring "noauto" for root device
	[  +0.040069] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.864007] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +2.137069] systemd-fstab-generator[515]: Ignoring "noauto" for root device
	[  +0.090405] systemd-fstab-generator[526]: Ignoring "noauto" for root device
	[  +0.862966] systemd-fstab-generator[754]: Ignoring "noauto" for root device
	[  +0.231173] systemd-fstab-generator[825]: Ignoring "noauto" for root device
	[  +0.099444] systemd-fstab-generator[836]: Ignoring "noauto" for root device
	[  +0.098052] systemd-fstab-generator[849]: Ignoring "noauto" for root device
	[  +1.235513] kauditd_printk_skb: 30 callbacks suppressed
	[  +0.184255] systemd-fstab-generator[1008]: Ignoring "noauto" for root device
	[  +0.087476] systemd-fstab-generator[1019]: Ignoring "noauto" for root device
	[  +0.087285] systemd-fstab-generator[1030]: Ignoring "noauto" for root device
	[  +0.093716] systemd-fstab-generator[1041]: Ignoring "noauto" for root device
	[  +0.100787] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[ +11.477405] systemd-fstab-generator[1322]: Ignoring "noauto" for root device
	[  +0.276534] kauditd_printk_skb: 29 callbacks suppressed
	[Jul 7 23:09] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [b2c1151ec663] <==
	* {"level":"info","ts":"2023-07-07T23:05:45.558Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"1ea7c4ec1c186768","initial-advertise-peer-urls":["https://192.168.64.55:2380"],"listen-peer-urls":["https://192.168.64.55:2380"],"advertise-client-urls":["https://192.168.64.55:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.55:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-07T23:05:45.558Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-07T23:05:46.836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ea7c4ec1c186768 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-07T23:05:46.836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ea7c4ec1c186768 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-07T23:05:46.836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ea7c4ec1c186768 received MsgPreVoteResp from 1ea7c4ec1c186768 at term 2"}
	{"level":"info","ts":"2023-07-07T23:05:46.836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ea7c4ec1c186768 became candidate at term 3"}
	{"level":"info","ts":"2023-07-07T23:05:46.836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ea7c4ec1c186768 received MsgVoteResp from 1ea7c4ec1c186768 at term 3"}
	{"level":"info","ts":"2023-07-07T23:05:46.836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ea7c4ec1c186768 became leader at term 3"}
	{"level":"info","ts":"2023-07-07T23:05:46.836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1ea7c4ec1c186768 elected leader 1ea7c4ec1c186768 at term 3"}
	{"level":"info","ts":"2023-07-07T23:05:46.838Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1ea7c4ec1c186768","local-member-attributes":"{Name:multinode-136000 ClientURLs:[https://192.168.64.55:2379]}","request-path":"/0/members/1ea7c4ec1c186768/attributes","cluster-id":"20452ecec409ac90","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-07T23:05:46.838Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-07T23:05:46.839Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-07T23:05:46.839Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-07T23:05:46.838Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-07T23:05:46.840Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.64.55:2379"}
	{"level":"info","ts":"2023-07-07T23:05:46.840Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-07T23:06:41.130Z","caller":"traceutil/trace.go:171","msg":"trace[1794036385] transaction","detail":"{read_only:false; response_revision:882; number_of_response:1; }","duration":"109.620264ms","start":"2023-07-07T23:06:41.021Z","end":"2023-07-07T23:06:41.130Z","steps":["trace[1794036385] 'process raft request'  (duration: 109.58281ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-07T23:06:41.131Z","caller":"traceutil/trace.go:171","msg":"trace[688682531] transaction","detail":"{read_only:false; response_revision:881; number_of_response:1; }","duration":"119.439502ms","start":"2023-07-07T23:06:41.011Z","end":"2023-07-07T23:06:41.131Z","steps":["trace[688682531] 'process raft request'  (duration: 118.97299ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-07T23:06:41.131Z","caller":"traceutil/trace.go:171","msg":"trace[1872480685] transaction","detail":"{read_only:false; response_revision:880; number_of_response:1; }","duration":"119.706791ms","start":"2023-07-07T23:06:41.011Z","end":"2023-07-07T23:06:41.131Z","steps":["trace[1872480685] 'process raft request'  (duration: 79.23835ms)","trace[1872480685] 'compare'  (duration: 39.63756ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-07T23:08:12.132Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-07-07T23:08:12.133Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"multinode-136000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.55:2380"],"advertise-client-urls":["https://192.168.64.55:2379"]}
	{"level":"info","ts":"2023-07-07T23:08:12.144Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1ea7c4ec1c186768","current-leader-member-id":"1ea7c4ec1c186768"}
	{"level":"info","ts":"2023-07-07T23:08:12.145Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.64.55:2380"}
	{"level":"info","ts":"2023-07-07T23:08:12.147Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.64.55:2380"}
	{"level":"info","ts":"2023-07-07T23:08:12.147Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"multinode-136000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.55:2380"],"advertise-client-urls":["https://192.168.64.55:2379"]}
	
	* 
	* ==> etcd [cd21f109b5b1] <==
	* {"level":"info","ts":"2023-07-07T23:08:58.575Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-07T23:08:58.575Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-07T23:08:58.576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ea7c4ec1c186768 switched to configuration voters=(2208950660611204968)"}
	{"level":"info","ts":"2023-07-07T23:08:58.576Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"20452ecec409ac90","local-member-id":"1ea7c4ec1c186768","added-peer-id":"1ea7c4ec1c186768","added-peer-peer-urls":["https://192.168.64.55:2380"]}
	{"level":"info","ts":"2023-07-07T23:08:58.576Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"20452ecec409ac90","local-member-id":"1ea7c4ec1c186768","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-07T23:08:58.576Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-07T23:08:58.583Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-07T23:08:58.586Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"1ea7c4ec1c186768","initial-advertise-peer-urls":["https://192.168.64.55:2380"],"listen-peer-urls":["https://192.168.64.55:2380"],"advertise-client-urls":["https://192.168.64.55:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.55:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-07T23:08:58.586Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-07T23:08:58.588Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.64.55:2380"}
	{"level":"info","ts":"2023-07-07T23:08:58.588Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.64.55:2380"}
	{"level":"info","ts":"2023-07-07T23:08:59.632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ea7c4ec1c186768 is starting a new election at term 3"}
	{"level":"info","ts":"2023-07-07T23:08:59.632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ea7c4ec1c186768 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-07-07T23:08:59.632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ea7c4ec1c186768 received MsgPreVoteResp from 1ea7c4ec1c186768 at term 3"}
	{"level":"info","ts":"2023-07-07T23:08:59.632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ea7c4ec1c186768 became candidate at term 4"}
	{"level":"info","ts":"2023-07-07T23:08:59.632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ea7c4ec1c186768 received MsgVoteResp from 1ea7c4ec1c186768 at term 4"}
	{"level":"info","ts":"2023-07-07T23:08:59.632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ea7c4ec1c186768 became leader at term 4"}
	{"level":"info","ts":"2023-07-07T23:08:59.632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1ea7c4ec1c186768 elected leader 1ea7c4ec1c186768 at term 4"}
	{"level":"info","ts":"2023-07-07T23:08:59.635Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-07T23:08:59.635Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-07T23:08:59.636Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-07T23:08:59.636Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-07T23:08:59.636Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-07T23:08:59.637Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.64.55:2379"}
	{"level":"info","ts":"2023-07-07T23:08:59.635Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1ea7c4ec1c186768","local-member-attributes":"{Name:multinode-136000 ClientURLs:[https://192.168.64.55:2379]}","request-path":"/0/members/1ea7c4ec1c186768/attributes","cluster-id":"20452ecec409ac90","publish-timeout":"7s"}
	
	* 
	* ==> kernel <==
	*  23:11:01 up 2 min,  0 users,  load average: 0.20, 0.18, 0.08
	Linux multinode-136000 5.10.57 #1 SMP Fri Jun 30 21:41:53 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [72cf41cf2f02] <==
	* I0707 23:09:55.564083       1 main.go:250] Node multinode-136000-m02 has CIDR [10.244.1.0/24] 
	I0707 23:10:05.567169       1 main.go:223] Handling node with IPs: map[192.168.64.55:{}]
	I0707 23:10:05.567202       1 main.go:227] handling current node
	I0707 23:10:05.567210       1 main.go:223] Handling node with IPs: map[192.168.64.56:{}]
	I0707 23:10:05.567214       1 main.go:250] Node multinode-136000-m02 has CIDR [10.244.1.0/24] 
	I0707 23:10:15.570551       1 main.go:223] Handling node with IPs: map[192.168.64.55:{}]
	I0707 23:10:15.570566       1 main.go:227] handling current node
	I0707 23:10:15.570573       1 main.go:223] Handling node with IPs: map[192.168.64.56:{}]
	I0707 23:10:15.570576       1 main.go:250] Node multinode-136000-m02 has CIDR [10.244.1.0/24] 
	I0707 23:10:25.579646       1 main.go:223] Handling node with IPs: map[192.168.64.55:{}]
	I0707 23:10:25.579681       1 main.go:227] handling current node
	I0707 23:10:25.579690       1 main.go:223] Handling node with IPs: map[192.168.64.56:{}]
	I0707 23:10:25.579694       1 main.go:250] Node multinode-136000-m02 has CIDR [10.244.1.0/24] 
	I0707 23:10:35.596854       1 main.go:223] Handling node with IPs: map[192.168.64.55:{}]
	I0707 23:10:35.596873       1 main.go:227] handling current node
	I0707 23:10:35.596881       1 main.go:223] Handling node with IPs: map[192.168.64.56:{}]
	I0707 23:10:35.596887       1 main.go:250] Node multinode-136000-m02 has CIDR [10.244.1.0/24] 
	I0707 23:10:45.601302       1 main.go:223] Handling node with IPs: map[192.168.64.55:{}]
	I0707 23:10:45.601317       1 main.go:227] handling current node
	I0707 23:10:45.601323       1 main.go:223] Handling node with IPs: map[192.168.64.56:{}]
	I0707 23:10:45.601327       1 main.go:250] Node multinode-136000-m02 has CIDR [10.244.1.0/24] 
	I0707 23:10:55.616457       1 main.go:223] Handling node with IPs: map[192.168.64.55:{}]
	I0707 23:10:55.616503       1 main.go:227] handling current node
	I0707 23:10:55.616522       1 main.go:223] Handling node with IPs: map[192.168.64.56:{}]
	I0707 23:10:55.616529       1 main.go:250] Node multinode-136000-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kindnet [b1b16ce0e1c2] <==
	* I0707 23:07:32.311460       1 main.go:223] Handling node with IPs: map[192.168.64.55:{}]
	I0707 23:07:32.311626       1 main.go:227] handling current node
	I0707 23:07:32.311696       1 main.go:223] Handling node with IPs: map[192.168.64.56:{}]
	I0707 23:07:32.311767       1 main.go:250] Node multinode-136000-m02 has CIDR [10.244.1.0/24] 
	I0707 23:07:32.311912       1 main.go:223] Handling node with IPs: map[192.168.64.57:{}]
	I0707 23:07:32.311993       1 main.go:250] Node multinode-136000-m03 has CIDR [10.244.3.0/24] 
	I0707 23:07:42.325387       1 main.go:223] Handling node with IPs: map[192.168.64.55:{}]
	I0707 23:07:42.325402       1 main.go:227] handling current node
	I0707 23:07:42.325408       1 main.go:223] Handling node with IPs: map[192.168.64.56:{}]
	I0707 23:07:42.325412       1 main.go:250] Node multinode-136000-m02 has CIDR [10.244.1.0/24] 
	I0707 23:07:42.325495       1 main.go:223] Handling node with IPs: map[192.168.64.57:{}]
	I0707 23:07:42.325501       1 main.go:250] Node multinode-136000-m03 has CIDR [10.244.3.0/24] 
	I0707 23:07:52.332223       1 main.go:223] Handling node with IPs: map[192.168.64.55:{}]
	I0707 23:07:52.332257       1 main.go:227] handling current node
	I0707 23:07:52.332273       1 main.go:223] Handling node with IPs: map[192.168.64.56:{}]
	I0707 23:07:52.332280       1 main.go:250] Node multinode-136000-m02 has CIDR [10.244.1.0/24] 
	I0707 23:07:52.332347       1 main.go:223] Handling node with IPs: map[192.168.64.57:{}]
	I0707 23:07:52.332410       1 main.go:250] Node multinode-136000-m03 has CIDR [10.244.2.0/24] 
	I0707 23:07:52.332471       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.64.57 Flags: [] Table: 0} 
	I0707 23:08:02.345212       1 main.go:223] Handling node with IPs: map[192.168.64.55:{}]
	I0707 23:08:02.345247       1 main.go:227] handling current node
	I0707 23:08:02.345255       1 main.go:223] Handling node with IPs: map[192.168.64.56:{}]
	I0707 23:08:02.345259       1 main.go:250] Node multinode-136000-m02 has CIDR [10.244.1.0/24] 
	I0707 23:08:02.345437       1 main.go:223] Handling node with IPs: map[192.168.64.57:{}]
	I0707 23:08:02.345464       1 main.go:250] Node multinode-136000-m03 has CIDR [10.244.2.0/24] 
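	
	kindnet installs one host route per remote PodCIDR (the "Adding route" line above). Whether
	those routes survived the restart can be checked with plain iproute2 on the VM:
	
	  minikube ssh -p multinode-136000 -- ip route show
	
	Expected for this run: a "10.244.1.0/24 via 192.168.64.56" entry for m02.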
	
	* 
	* ==> kube-apiserver [50f3c898eb77] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0707 23:08:12.138925       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0707 23:08:12.138952       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0707 23:08:12.138971       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [8cd753367a73] <==
	* I0707 23:09:00.667979       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0707 23:09:00.668425       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0707 23:09:00.668503       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0707 23:09:00.760240       1 shared_informer.go:318] Caches are synced for configmaps
	I0707 23:09:00.794924       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0707 23:09:00.797318       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0707 23:09:00.855532       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0707 23:09:00.855817       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0707 23:09:00.859404       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0707 23:09:00.859986       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0707 23:09:00.861640       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0707 23:09:00.868629       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0707 23:09:00.868774       1 aggregator.go:152] initial CRD sync complete...
	I0707 23:09:00.868802       1 autoregister_controller.go:141] Starting autoregister controller
	I0707 23:09:00.868807       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0707 23:09:00.868812       1 cache.go:39] Caches are synced for autoregister controller
	I0707 23:09:01.415743       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0707 23:09:01.664963       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0707 23:09:03.269471       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0707 23:09:03.383062       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0707 23:09:03.389396       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0707 23:09:03.429270       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0707 23:09:03.433884       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0707 23:09:13.774219       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0707 23:09:14.121657       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [317ce02a7796] <==
	* I0707 23:06:01.517973       1 shared_informer.go:318] Caches are synced for garbage collector
	I0707 23:06:01.518010       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	W0707 23:06:08.804683       1 topologycache.go:232] Can't get CPU or zone information for multinode-136000-m02 node
	I0707 23:06:10.910813       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-jbj7z" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-jbj7z"
	I0707 23:06:10.910844       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d-78qmb" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5d78c9869d-78qmb"
	I0707 23:06:10.910851       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0707 23:06:40.927546       1 event.go:307] "Event occurred" object="multinode-136000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-136000-m02 status is now: NodeNotReady"
	I0707 23:06:40.927626       1 event.go:307] "Event occurred" object="multinode-136000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-136000-m03 status is now: NodeNotReady"
	I0707 23:06:40.947621       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-5865g" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0707 23:06:40.950611       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-pgvw7" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0707 23:06:40.986391       1 event.go:307] "Event occurred" object="kube-system/kindnet-zpx7k" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0707 23:06:40.994602       1 event.go:307] "Event occurred" object="kube-system/kindnet-gj2vg" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0707 23:06:41.133611       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-dvrg9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0707 23:06:51.225331       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-6mm4t"
	I0707 23:06:54.900917       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-136000-m02\" does not exist"
	I0707 23:06:54.907195       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-136000-m02" podCIDRs=[10.244.1.0/24]
	W0707 23:07:00.573589       1 topologycache.go:232] Can't get CPU or zone information for multinode-136000-m02 node
	I0707 23:07:01.146935       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-pgvw7" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-pgvw7"
	W0707 23:07:42.357160       1 topologycache.go:232] Can't get CPU or zone information for multinode-136000-m02 node
	I0707 23:07:42.915716       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-136000-m03\" does not exist"
	W0707 23:07:42.915892       1 topologycache.go:232] Can't get CPU or zone information for multinode-136000-m02 node
	I0707 23:07:42.923297       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-136000-m03" podCIDRs=[10.244.2.0/24]
	W0707 23:08:06.560200       1 topologycache.go:232] Can't get CPU or zone information for multinode-136000-m03 node
	W0707 23:08:09.118442       1 topologycache.go:232] Can't get CPU or zone information for multinode-136000-m02 node
	I0707 23:08:11.160934       1 event.go:307] "Event occurred" object="multinode-136000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-136000-m03 event: Removing Node multinode-136000-m03 from Controller"
	
	* 
	* ==> kube-controller-manager [49f6caa448e5] <==
	* I0707 23:09:13.735481       1 shared_informer.go:318] Caches are synced for taint
	I0707 23:09:13.736242       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0707 23:09:13.736732       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-136000"
	I0707 23:09:13.736969       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-136000-m02"
	I0707 23:09:13.737931       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0707 23:09:13.738040       1 event.go:307] "Event occurred" object="multinode-136000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-136000 event: Registered Node multinode-136000 in Controller"
	I0707 23:09:13.738153       1 event.go:307] "Event occurred" object="multinode-136000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-136000-m02 event: Registered Node multinode-136000-m02 in Controller"
	I0707 23:09:13.738333       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0707 23:09:13.738615       1 taint_manager.go:211] "Sending events to api server"
	I0707 23:09:13.747305       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0707 23:09:13.801457       1 shared_informer.go:318] Caches are synced for stateful set
	I0707 23:09:13.833694       1 shared_informer.go:318] Caches are synced for resource quota
	I0707 23:09:13.867804       1 shared_informer.go:318] Caches are synced for disruption
	I0707 23:09:13.927010       1 shared_informer.go:318] Caches are synced for resource quota
	I0707 23:09:14.229147       1 shared_informer.go:318] Caches are synced for garbage collector
	I0707 23:09:14.229364       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0707 23:09:14.258992       1 shared_informer.go:318] Caches are synced for garbage collector
	I0707 23:09:53.752432       1 event.go:307] "Event occurred" object="multinode-136000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-136000-m02 status is now: NodeNotReady"
	I0707 23:09:53.758183       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-6mm4t" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0707 23:09:53.768143       1 event.go:307] "Event occurred" object="kube-system/kindnet-gj2vg" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0707 23:09:53.778135       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-dvrg9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0707 23:10:13.700388       1 gc_controller.go:337] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-5865g"
	I0707 23:10:13.712658       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-5865g"
	I0707 23:10:13.712793       1 gc_controller.go:337] "PodGC is force deleting Pod" pod="kube-system/kindnet-zpx7k"
	I0707 23:10:13.722682       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-zpx7k"
	
	* 
	* ==> kube-proxy [974238f9eec7] <==
	* I0707 23:09:02.780477       1 node.go:141] Successfully retrieved node IP: 192.168.64.55
	I0707 23:09:02.780599       1 server_others.go:110] "Detected node IP" address="192.168.64.55"
	I0707 23:09:02.780732       1 server_others.go:554] "Using iptables proxy"
	I0707 23:09:02.871710       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0707 23:09:02.871748       1 server_others.go:192] "Using iptables Proxier"
	I0707 23:09:02.871970       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0707 23:09:02.873001       1 server.go:658] "Version info" version="v1.27.3"
	I0707 23:09:02.873032       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0707 23:09:02.874620       1 config.go:188] "Starting service config controller"
	I0707 23:09:02.874962       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0707 23:09:02.875194       1 config.go:97] "Starting endpoint slice config controller"
	I0707 23:09:02.875223       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0707 23:09:02.876244       1 config.go:315] "Starting node config controller"
	I0707 23:09:02.876278       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0707 23:09:02.977213       1 shared_informer.go:318] Caches are synced for node config
	I0707 23:09:02.977249       1 shared_informer.go:318] Caches are synced for service config
	I0707 23:09:02.977266       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [df2ce2928fd1] <==
	* I0707 23:05:49.264646       1 node.go:141] Successfully retrieved node IP: 192.168.64.55
	I0707 23:05:49.264820       1 server_others.go:110] "Detected node IP" address="192.168.64.55"
	I0707 23:05:49.264878       1 server_others.go:554] "Using iptables proxy"
	I0707 23:05:49.341571       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0707 23:05:49.341647       1 server_others.go:192] "Using iptables Proxier"
	I0707 23:05:49.342459       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0707 23:05:49.345132       1 server.go:658] "Version info" version="v1.27.3"
	I0707 23:05:49.345145       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0707 23:05:49.346620       1 config.go:315] "Starting node config controller"
	I0707 23:05:49.347020       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0707 23:05:49.349526       1 config.go:188] "Starting service config controller"
	I0707 23:05:49.349579       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0707 23:05:49.349594       1 config.go:97] "Starting endpoint slice config controller"
	I0707 23:05:49.349597       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0707 23:05:49.451728       1 shared_informer.go:318] Caches are synced for node config
	I0707 23:05:49.451743       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0707 23:05:49.451751       1 shared_informer.go:318] Caches are synced for service config
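	
	Both kube-proxy generations came up in iptables mode, so service rules land in the
	KUBE-SERVICES nat chain; a spot check on the VM (standard iptables flags):
	
	  minikube ssh -p multinode-136000 -- sudo iptables -t nat -L KUBE-SERVICES -n | head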
	
	* 
	* ==> kube-scheduler [c6fc0d8fe733] <==
	* I0707 23:08:58.884917       1 serving.go:348] Generated self-signed cert in-memory
	W0707 23:09:00.712140       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0707 23:09:00.712261       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0707 23:09:00.712308       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0707 23:09:00.712394       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0707 23:09:00.762560       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0707 23:09:00.762616       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0707 23:09:00.766489       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0707 23:09:00.774592       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0707 23:09:00.774623       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0707 23:09:00.774644       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0707 23:09:00.817362       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0707 23:09:00.819452       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0707 23:09:00.819496       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0707 23:09:00.819506       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0707 23:09:00.820981       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0707 23:09:00.821014       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0707 23:09:00.875186       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [de3cae1acc39] <==
	* I0707 23:05:46.366789       1 serving.go:348] Generated self-signed cert in-memory
	W0707 23:05:47.940408       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0707 23:05:47.940567       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0707 23:05:47.940592       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0707 23:05:47.940605       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0707 23:05:47.965011       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0707 23:05:47.965157       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0707 23:05:47.966588       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0707 23:05:47.967148       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0707 23:05:47.967248       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0707 23:05:47.967480       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0707 23:05:48.068428       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0707 23:08:12.115468       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0707 23:08:12.115565       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0707 23:08:12.115684       1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0707 23:08:12.115751       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Fri 2023-07-07 23:08:36 UTC, ends at Fri 2023-07-07 23:11:02 UTC. --
	Jul 07 23:09:05 multinode-136000 kubelet[1328]: E0707 23:09:05.424070    1328 projected.go:198] Error preparing data for projected volume kube-api-access-l8rr7 for pod default/busybox-67b7f59bb-jbj7z: object "default"/"kube-root-ca.crt" not registered
	Jul 07 23:09:05 multinode-136000 kubelet[1328]: E0707 23:09:05.424104    1328 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f197034d-62fc-435d-8053-3ef7c1ac4e29-kube-api-access-l8rr7 podName:f197034d-62fc-435d-8053-3ef7c1ac4e29 nodeName:}" failed. No retries permitted until 2023-07-07 23:09:09.424094685 +0000 UTC m=+14.112080777 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8rr7" (UniqueName: "kubernetes.io/projected/f197034d-62fc-435d-8053-3ef7c1ac4e29-kube-api-access-l8rr7") pod "busybox-67b7f59bb-jbj7z" (UID: "f197034d-62fc-435d-8053-3ef7c1ac4e29") : object "default"/"kube-root-ca.crt" not registered
	Jul 07 23:09:06 multinode-136000 kubelet[1328]: E0707 23:09:06.559151    1328 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-78qmb" podUID=d9671f13-fa08-4161-b216-53f645b9a1c1
	Jul 07 23:09:06 multinode-136000 kubelet[1328]: E0707 23:09:06.559303    1328 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-67b7f59bb-jbj7z" podUID=f197034d-62fc-435d-8053-3ef7c1ac4e29
	Jul 07 23:09:08 multinode-136000 kubelet[1328]: E0707 23:09:08.558257    1328 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-67b7f59bb-jbj7z" podUID=f197034d-62fc-435d-8053-3ef7c1ac4e29
	Jul 07 23:09:08 multinode-136000 kubelet[1328]: E0707 23:09:08.560492    1328 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-78qmb" podUID=d9671f13-fa08-4161-b216-53f645b9a1c1
	Jul 07 23:09:08 multinode-136000 kubelet[1328]: I0707 23:09:08.809950    1328 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jul 07 23:09:09 multinode-136000 kubelet[1328]: E0707 23:09:09.356265    1328 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 07 23:09:09 multinode-136000 kubelet[1328]: E0707 23:09:09.356389    1328 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d9671f13-fa08-4161-b216-53f645b9a1c1-config-volume podName:d9671f13-fa08-4161-b216-53f645b9a1c1 nodeName:}" failed. No retries permitted until 2023-07-07 23:09:17.356371002 +0000 UTC m=+22.044357111 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d9671f13-fa08-4161-b216-53f645b9a1c1-config-volume") pod "coredns-5d78c9869d-78qmb" (UID: "d9671f13-fa08-4161-b216-53f645b9a1c1") : object "kube-system"/"coredns" not registered
	Jul 07 23:09:09 multinode-136000 kubelet[1328]: E0707 23:09:09.457888    1328 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jul 07 23:09:09 multinode-136000 kubelet[1328]: E0707 23:09:09.458111    1328 projected.go:198] Error preparing data for projected volume kube-api-access-l8rr7 for pod default/busybox-67b7f59bb-jbj7z: object "default"/"kube-root-ca.crt" not registered
	Jul 07 23:09:09 multinode-136000 kubelet[1328]: E0707 23:09:09.458310    1328 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f197034d-62fc-435d-8053-3ef7c1ac4e29-kube-api-access-l8rr7 podName:f197034d-62fc-435d-8053-3ef7c1ac4e29 nodeName:}" failed. No retries permitted until 2023-07-07 23:09:17.458289636 +0000 UTC m=+22.146275742 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8rr7" (UniqueName: "kubernetes.io/projected/f197034d-62fc-435d-8053-3ef7c1ac4e29-kube-api-access-l8rr7") pod "busybox-67b7f59bb-jbj7z" (UID: "f197034d-62fc-435d-8053-3ef7c1ac4e29") : object "default"/"kube-root-ca.crt" not registered
	Jul 07 23:09:18 multinode-136000 kubelet[1328]: I0707 23:09:18.331212    1328 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06c8fde8a8f6aad4866434727d7da1fdf75e21e8527c1011bf014da051f57fba"
	Jul 07 23:09:33 multinode-136000 kubelet[1328]: I0707 23:09:33.492488    1328 scope.go:115] "RemoveContainer" containerID="a518f066f2a862e9a51c23a8c93693e28c2f91d58b0c62f5f2e45e53b0332901"
	Jul 07 23:09:33 multinode-136000 kubelet[1328]: I0707 23:09:33.492773    1328 scope.go:115] "RemoveContainer" containerID="bbaccdaa2e9395735f07378a8789b3f62c48921c4782df12afe45bcc1b79179c"
	Jul 07 23:09:33 multinode-136000 kubelet[1328]: E0707 23:09:33.493149    1328 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e617383f-c16f-44a7-a1a4-a2813ecc84f2)\"" pod="kube-system/storage-provisioner" podUID=e617383f-c16f-44a7-a1a4-a2813ecc84f2
	Jul 07 23:09:45 multinode-136000 kubelet[1328]: I0707 23:09:45.559054    1328 scope.go:115] "RemoveContainer" containerID="bbaccdaa2e9395735f07378a8789b3f62c48921c4782df12afe45bcc1b79179c"
	Jul 07 23:09:55 multinode-136000 kubelet[1328]: E0707 23:09:55.574205    1328 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 07 23:09:55 multinode-136000 kubelet[1328]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 07 23:09:55 multinode-136000 kubelet[1328]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 07 23:09:55 multinode-136000 kubelet[1328]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 07 23:10:55 multinode-136000 kubelet[1328]: E0707 23:10:55.574787    1328 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 07 23:10:55 multinode-136000 kubelet[1328]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 07 23:10:55 multinode-136000 kubelet[1328]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 07 23:10:55 multinode-136000 kubelet[1328]:  > table=nat chain=KUBE-KUBELET-CANARY
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-136000 -n multinode-136000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-136000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (155.06s)
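
The kubelet journal above shows the two recurring symptoms behind this failure: pods stuck on "cni config uninitialized" after the restart, and the iptables canary unable to create its chain because the guest kernel has no ip6tables nat table. For reference only, a minimal shell sketch (not part of the test harness, and assuming the multinode-136000 profile is still running) of how one might inspect these by hand:

	# Overall profile state after the failed restart
	out/minikube-darwin-amd64 status -p multinode-136000
	# Tail the kubelet journal for the CNI and volume-mount errors quoted above
	out/minikube-darwin-amd64 ssh -p multinode-136000 "sudo journalctl -u kubelet --no-pager -n 50"
	# Reproduce the canary failure directly; this errors out if the nat table is missing
	out/minikube-darwin-amd64 ssh -p multinode-136000 "sudo ip6tables -t nat -L"
	# Confirm which pods never became Ready
	kubectl --context multinode-136000 get pods -A -o wide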


Test pass (296/317)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 21.15
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.28
10 TestDownloadOnly/v1.27.3/json-events 13.09
11 TestDownloadOnly/v1.27.3/preload-exists 0
14 TestDownloadOnly/v1.27.3/kubectl 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.3
16 TestDownloadOnly/DeleteAll 0.4
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.35
19 TestBinaryMirror 0.99
20 TestOffline 57.36
22 TestAddons/Setup 209.9
24 TestAddons/parallel/Registry 15.67
25 TestAddons/parallel/Ingress 20.84
26 TestAddons/parallel/InspektorGadget 10.4
27 TestAddons/parallel/MetricsServer 5.4
28 TestAddons/parallel/HelmTiller 12.19
30 TestAddons/parallel/CSI 61.13
31 TestAddons/parallel/Headlamp 13.16
32 TestAddons/parallel/CloudSpanner 5.35
35 TestAddons/serial/GCPAuth/Namespaces 0.09
36 TestAddons/StoppedEnableDisable 5.7
37 TestCertOptions 41.5
38 TestCertExpiration 248.28
39 TestDockerFlags 49.06
40 TestForceSystemdFlag 42.66
41 TestForceSystemdEnv 39.53
43 TestHyperKitDriverInstallOrUpdate 6.5
46 TestErrorSpam/setup 35.33
47 TestErrorSpam/start 1.22
48 TestErrorSpam/status 0.44
49 TestErrorSpam/pause 1.25
50 TestErrorSpam/unpause 1.23
51 TestErrorSpam/stop 5.64
54 TestFunctional/serial/CopySyncFile 0
55 TestFunctional/serial/StartWithProxy 53.04
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 38.71
58 TestFunctional/serial/KubeContext 0.03
59 TestFunctional/serial/KubectlGetPods 0.06
62 TestFunctional/serial/CacheCmd/cache/add_remote 6.29
63 TestFunctional/serial/CacheCmd/cache/add_local 1.37
64 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
65 TestFunctional/serial/CacheCmd/cache/list 0.07
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.16
67 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
68 TestFunctional/serial/CacheCmd/cache/delete 0.13
69 TestFunctional/serial/MinikubeKubectlCmd 0.53
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.72
71 TestFunctional/serial/ExtraConfig 39.94
72 TestFunctional/serial/ComponentHealth 0.05
73 TestFunctional/serial/LogsCmd 2.79
74 TestFunctional/serial/LogsFileCmd 2.58
75 TestFunctional/serial/InvalidService 4.95
77 TestFunctional/parallel/ConfigCmd 0.42
78 TestFunctional/parallel/DashboardCmd 11.99
79 TestFunctional/parallel/DryRun 1.04
80 TestFunctional/parallel/InternationalLanguage 0.57
81 TestFunctional/parallel/StatusCmd 0.44
85 TestFunctional/parallel/ServiceCmdConnect 13.37
86 TestFunctional/parallel/AddonsCmd 0.27
87 TestFunctional/parallel/PersistentVolumeClaim 27.74
89 TestFunctional/parallel/SSHCmd 0.27
90 TestFunctional/parallel/CpCmd 0.6
91 TestFunctional/parallel/MySQL 26.4
92 TestFunctional/parallel/FileSync 0.15
93 TestFunctional/parallel/CertSync 0.88
97 TestFunctional/parallel/NodeLabels 0.05
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.17
101 TestFunctional/parallel/License 0.76
103 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.35
104 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
106 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.19
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
108 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
109 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
110 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
111 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
112 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
113 TestFunctional/parallel/ServiceCmd/DeployApp 7.11
114 TestFunctional/parallel/ProfileCmd/profile_not_create 0.28
115 TestFunctional/parallel/ProfileCmd/profile_list 0.26
116 TestFunctional/parallel/ProfileCmd/profile_json_output 0.25
117 TestFunctional/parallel/MountCmd/any-port 7.96
118 TestFunctional/parallel/ServiceCmd/List 0.37
119 TestFunctional/parallel/ServiceCmd/JSONOutput 0.36
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.24
121 TestFunctional/parallel/ServiceCmd/Format 0.24
122 TestFunctional/parallel/ServiceCmd/URL 0.3
123 TestFunctional/parallel/MountCmd/specific-port 2.28
124 TestFunctional/parallel/MountCmd/VerifyCleanup 1.39
125 TestFunctional/parallel/Version/short 0.11
126 TestFunctional/parallel/Version/components 0.44
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.15
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.17
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.14
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.16
131 TestFunctional/parallel/ImageCommands/ImageBuild 3.48
132 TestFunctional/parallel/ImageCommands/Setup 3.26
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.44
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.43
135 TestFunctional/parallel/DockerEnv/bash 0.7
136 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
137 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
138 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.8
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.21
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.37
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.51
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.32
144 TestFunctional/delete_addon-resizer_images 0.13
145 TestFunctional/delete_my-image_image 0.05
146 TestFunctional/delete_minikube_cached_images 0.05
150 TestImageBuild/serial/Setup 39.89
151 TestImageBuild/serial/NormalBuild 2.27
152 TestImageBuild/serial/BuildWithBuildArg 0.66
153 TestImageBuild/serial/BuildWithDockerIgnore 0.21
154 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.2
157 TestIngressAddonLegacy/StartLegacyK8sCluster 82.73
159 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 19.73
160 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.51
161 TestIngressAddonLegacy/serial/ValidateIngressAddons 31.85
164 TestJSONOutput/start/Command 77.74
165 TestJSONOutput/start/Audit 0
167 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/pause/Command 0.44
171 TestJSONOutput/pause/Audit 0
173 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/unpause/Command 0.43
177 TestJSONOutput/unpause/Audit 0
179 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/stop/Command 8.15
183 TestJSONOutput/stop/Audit 0
185 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
187 TestErrorJSONOutput 0.7
192 TestMainNoArgs 0.06
196 TestMountStart/serial/StartWithMountFirst 17.15
197 TestMountStart/serial/VerifyMountFirst 0.28
198 TestMountStart/serial/StartWithMountSecond 17.31
199 TestMountStart/serial/VerifyMountSecond 0.28
200 TestMountStart/serial/DeleteFirst 2.4
201 TestMountStart/serial/VerifyMountPostDelete 0.31
202 TestMountStart/serial/Stop 2.22
203 TestMountStart/serial/RestartStopped 41.55
204 TestMountStart/serial/VerifyMountPostStop 0.29
207 TestMultiNode/serial/FreshStart2Nodes 102.76
208 TestMultiNode/serial/DeployApp2Nodes 5.58
209 TestMultiNode/serial/PingHostFrom2Pods 0.83
210 TestMultiNode/serial/AddNode 37.37
211 TestMultiNode/serial/ProfileList 0.24
212 TestMultiNode/serial/CopyFile 4.79
213 TestMultiNode/serial/StopNode 2.65
214 TestMultiNode/serial/StartAfterStop 29.34
215 TestMultiNode/serial/RestartKeepsNodes 191.08
216 TestMultiNode/serial/DeleteNode 3
217 TestMultiNode/serial/StopMultiNode 16.45
219 TestMultiNode/serial/ValidateNameConflict 45.65
223 TestPreload 155.19
225 TestScheduledStopUnix 107.3
226 TestSkaffold 112.28
229 TestRunningBinaryUpgrade 175.58
231 TestKubernetesUpgrade 141.27
244 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.86
245 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.56
246 TestStoppedBinaryUpgrade/Setup 1.94
247 TestStoppedBinaryUpgrade/Upgrade 162.36
249 TestPause/serial/Start 61.97
250 TestStoppedBinaryUpgrade/MinikubeLogs 3.26
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.48
260 TestNoKubernetes/serial/StartWithK8s 38.89
261 TestPause/serial/SecondStartNoReconfiguration 46.12
262 TestNoKubernetes/serial/StartWithStopK8s 8.22
263 TestNoKubernetes/serial/Start 18.66
264 TestPause/serial/Pause 0.51
265 TestPause/serial/VerifyStatus 0.14
266 TestPause/serial/Unpause 0.49
267 TestPause/serial/PauseAgain 0.61
268 TestPause/serial/DeletePaused 5.26
269 TestPause/serial/VerifyDeletedResources 0.2
270 TestNetworkPlugins/group/auto/Start 51.76
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.11
272 TestNoKubernetes/serial/ProfileList 0.35
273 TestNoKubernetes/serial/Stop 8.24
274 TestNoKubernetes/serial/StartNoArgs 16.81
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.11
276 TestNetworkPlugins/group/kindnet/Start 58.57
277 TestNetworkPlugins/group/auto/KubeletFlags 0.13
278 TestNetworkPlugins/group/auto/NetCatPod 14.25
279 TestNetworkPlugins/group/auto/DNS 0.12
280 TestNetworkPlugins/group/auto/Localhost 0.1
281 TestNetworkPlugins/group/auto/HairPin 0.1
282 TestNetworkPlugins/group/calico/Start 70.56
283 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
284 TestNetworkPlugins/group/kindnet/KubeletFlags 0.16
285 TestNetworkPlugins/group/kindnet/NetCatPod 15.25
286 TestNetworkPlugins/group/kindnet/DNS 0.13
287 TestNetworkPlugins/group/kindnet/Localhost 0.13
288 TestNetworkPlugins/group/kindnet/HairPin 0.1
289 TestNetworkPlugins/group/custom-flannel/Start 58.97
290 TestNetworkPlugins/group/calico/ControllerPod 5.01
291 TestNetworkPlugins/group/calico/KubeletFlags 0.16
292 TestNetworkPlugins/group/calico/NetCatPod 12.28
293 TestNetworkPlugins/group/calico/DNS 0.12
294 TestNetworkPlugins/group/calico/Localhost 0.11
295 TestNetworkPlugins/group/calico/HairPin 0.11
296 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.15
297 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.26
298 TestNetworkPlugins/group/false/Start 53.76
299 TestNetworkPlugins/group/custom-flannel/DNS 0.13
300 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
301 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
302 TestNetworkPlugins/group/enable-default-cni/Start 52.52
303 TestNetworkPlugins/group/false/KubeletFlags 0.15
304 TestNetworkPlugins/group/false/NetCatPod 14.28
305 TestNetworkPlugins/group/false/DNS 0.17
306 TestNetworkPlugins/group/false/Localhost 0.1
307 TestNetworkPlugins/group/false/HairPin 0.11
308 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.14
309 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.26
310 TestNetworkPlugins/group/flannel/Start 58.95
311 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
312 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
313 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
314 TestNetworkPlugins/group/bridge/Start 60.88
315 TestNetworkPlugins/group/flannel/ControllerPod 5.01
316 TestNetworkPlugins/group/flannel/KubeletFlags 0.16
317 TestNetworkPlugins/group/flannel/NetCatPod 15.26
318 TestNetworkPlugins/group/flannel/DNS 0.13
319 TestNetworkPlugins/group/flannel/Localhost 0.11
320 TestNetworkPlugins/group/flannel/HairPin 0.11
321 TestNetworkPlugins/group/bridge/KubeletFlags 0.14
322 TestNetworkPlugins/group/bridge/NetCatPod 12.28
323 TestNetworkPlugins/group/kubenet/Start 51.15
324 TestNetworkPlugins/group/bridge/DNS 0.13
325 TestNetworkPlugins/group/bridge/Localhost 0.1
326 TestNetworkPlugins/group/bridge/HairPin 0.1
328 TestStartStop/group/old-k8s-version/serial/FirstStart 149.82
329 TestNetworkPlugins/group/kubenet/KubeletFlags 0.14
330 TestNetworkPlugins/group/kubenet/NetCatPod 12.26
331 TestNetworkPlugins/group/kubenet/DNS 0.14
332 TestNetworkPlugins/group/kubenet/Localhost 0.1
333 TestNetworkPlugins/group/kubenet/HairPin 0.1
335 TestStartStop/group/no-preload/serial/FirstStart 62.28
336 TestStartStop/group/no-preload/serial/DeployApp 9.33
337 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.74
338 TestStartStop/group/no-preload/serial/Stop 8.24
339 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.28
340 TestStartStop/group/no-preload/serial/SecondStart 298.16
341 TestStartStop/group/old-k8s-version/serial/DeployApp 9.31
342 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.82
343 TestStartStop/group/old-k8s-version/serial/Stop 8.26
344 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.28
345 TestStartStop/group/old-k8s-version/serial/SecondStart 486.71
346 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
347 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
348 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.17
349 TestStartStop/group/no-preload/serial/Pause 1.81
351 TestStartStop/group/embed-certs/serial/FirstStart 79.03
352 TestStartStop/group/embed-certs/serial/DeployApp 10.33
353 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.77
354 TestStartStop/group/embed-certs/serial/Stop 8.25
355 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.28
356 TestStartStop/group/embed-certs/serial/SecondStart 298.08
357 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
358 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
359 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.17
360 TestStartStop/group/old-k8s-version/serial/Pause 1.69
362 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.6
363 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.33
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.84
365 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.24
366 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.27
367 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 296.42
368 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
369 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
370 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.16
371 TestStartStop/group/embed-certs/serial/Pause 1.77
373 TestStartStop/group/newest-cni/serial/FirstStart 49.32
374 TestStartStop/group/newest-cni/serial/DeployApp 0
375 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.94
376 TestStartStop/group/newest-cni/serial/Stop 8.27
377 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.29
378 TestStartStop/group/newest-cni/serial/SecondStart 38.08
379 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
381 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.19
382 TestStartStop/group/newest-cni/serial/Pause 1.85
383 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
384 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
385 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.17
386 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.87
TestDownloadOnly/v1.16.0/json-events (21.15s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-001000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-001000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit : (21.146288784s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (21.15s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-001000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-001000: exit status 85 (280.527869ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-001000 | jenkins | v1.30.1 | 07 Jul 23 15:43 PDT |          |
	|         | -p download-only-001000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/07 15:43:49
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0707 15:43:49.797062   29645 out.go:296] Setting OutFile to fd 1 ...
	I0707 15:43:49.797223   29645 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 15:43:49.797230   29645 out.go:309] Setting ErrFile to fd 2...
	I0707 15:43:49.797234   29645 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 15:43:49.797345   29645 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16845-29196/.minikube/bin
	W0707 15:43:49.797479   29645 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16845-29196/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16845-29196/.minikube/config/config.json: no such file or directory
	I0707 15:43:49.799391   29645 out.go:303] Setting JSON to true
	I0707 15:43:49.818083   29645 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9794,"bootTime":1688760035,"procs":397,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0707 15:43:49.818176   29645 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0707 15:43:49.840113   29645 out.go:97] [download-only-001000] minikube v1.30.1 on Darwin 13.4.1
	W0707 15:43:49.840352   29645 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16845-29196/.minikube/cache/preloaded-tarball: no such file or directory
	I0707 15:43:49.840414   29645 notify.go:220] Checking for updates...
	I0707 15:43:49.861911   29645 out.go:169] MINIKUBE_LOCATION=16845
	I0707 15:43:49.883846   29645 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16845-29196/kubeconfig
	I0707 15:43:49.906032   29645 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0707 15:43:49.928125   29645 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0707 15:43:49.949735   29645 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16845-29196/.minikube
	W0707 15:43:49.992870   29645 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0707 15:43:49.993340   29645 driver.go:373] Setting default libvirt URI to qemu:///system
	I0707 15:43:50.022701   29645 out.go:97] Using the hyperkit driver based on user configuration
	I0707 15:43:50.022793   29645 start.go:297] selected driver: hyperkit
	I0707 15:43:50.022808   29645 start.go:944] validating driver "hyperkit" against <nil>
	I0707 15:43:50.022994   29645 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0707 15:43:50.023215   29645 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/16845-29196/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0707 15:43:50.165838   29645 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.30.1
	I0707 15:43:50.169356   29645 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 15:43:50.169376   29645 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0707 15:43:50.169406   29645 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0707 15:43:50.171716   29645 start_flags.go:382] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0707 15:43:50.171852   29645 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0707 15:43:50.171873   29645 cni.go:84] Creating CNI manager for ""
	I0707 15:43:50.171885   29645 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0707 15:43:50.171895   29645 start_flags.go:319] config:
	{Name:download-only-001000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-001000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0707 15:43:50.172145   29645 iso.go:125] acquiring lock: {Name:mkc26c030f62bdf6e3ab619c68665518d3e66b24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0707 15:43:50.193777   29645 out.go:97] Downloading VM boot image ...
	I0707 15:43:50.193874   29645 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/16845-29196/.minikube/cache/iso/amd64/minikube-v1.30.1-1688144767-16765-amd64.iso
	I0707 15:43:58.375035   29645 out.go:97] Starting control plane node download-only-001000 in cluster download-only-001000
	I0707 15:43:58.375076   29645 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0707 15:43:58.476753   29645 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0707 15:43:58.476789   29645 cache.go:57] Caching tarball of preloaded images
	I0707 15:43:58.477117   29645 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0707 15:43:58.497410   29645 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0707 15:43:58.497506   29645 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0707 15:43:58.708080   29645 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/16845-29196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0707 15:44:07.404760   29645 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0707 15:44:07.404926   29645 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16845-29196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0707 15:44:07.940329   29645 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0707 15:44:07.940594   29645 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/download-only-001000/config.json ...
	I0707 15:44:07.940620   29645 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/download-only-001000/config.json: {Name:mkd6d08d3bb0e3c0fa85251a976cfc269d0f1f31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0707 15:44:07.940977   29645 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0707 15:44:07.941310   29645 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16845-29196/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-001000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.28s)
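
The "Last Start" log above shows preload.go resolving the v1.16.0 preload tarball, downloading it with an md5 checksum pinned in the URL, and then verifying the file on disk before caching it. As a sketch only (macOS md5, and assuming the default cache location under MINIKUBE_HOME rather than the Jenkins path in the log), the same verification can be repeated by hand:

	# Re-verify the cached preload tarball; the checksum is the one in the download URL above
	TARBALL="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"
	test "$(md5 -q "$TARBALL")" = "326f3ce331abb64565b50b8c9e791244" && echo "checksum ok" || echo "checksum mismatch"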

TestDownloadOnly/v1.27.3/json-events (13.09s)

=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-001000 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-001000 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=hyperkit : (13.0900413s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (13.09s)

TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

TestDownloadOnly/v1.27.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.3/kubectl
--- PASS: TestDownloadOnly/v1.27.3/kubectl (0.00s)

TestDownloadOnly/v1.27.3/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-001000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-001000: exit status 85 (300.345667ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-001000 | jenkins | v1.30.1 | 07 Jul 23 15:43 PDT |          |
	|         | -p download-only-001000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-001000 | jenkins | v1.30.1 | 07 Jul 23 15:44 PDT |          |
	|         | -p download-only-001000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/07 15:44:11
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0707 15:44:11.224885   29660 out.go:296] Setting OutFile to fd 1 ...
	I0707 15:44:11.225044   29660 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 15:44:11.225050   29660 out.go:309] Setting ErrFile to fd 2...
	I0707 15:44:11.225054   29660 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 15:44:11.225172   29660 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16845-29196/.minikube/bin
	W0707 15:44:11.225267   29660 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16845-29196/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16845-29196/.minikube/config/config.json: no such file or directory
	I0707 15:44:11.226486   29660 out.go:303] Setting JSON to true
	I0707 15:44:11.245436   29660 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9816,"bootTime":1688760035,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0707 15:44:11.245520   29660 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0707 15:44:11.266409   29660 out.go:97] [download-only-001000] minikube v1.30.1 on Darwin 13.4.1
	I0707 15:44:11.266717   29660 notify.go:220] Checking for updates...
	I0707 15:44:11.287591   29660 out.go:169] MINIKUBE_LOCATION=16845
	I0707 15:44:11.308673   29660 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16845-29196/kubeconfig
	I0707 15:44:11.330411   29660 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0707 15:44:11.351744   29660 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0707 15:44:11.372562   29660 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16845-29196/.minikube
	W0707 15:44:11.414351   29660 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0707 15:44:11.415068   29660 config.go:182] Loaded profile config "download-only-001000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0707 15:44:11.415155   29660 start.go:852] api.Load failed for download-only-001000: filestore "download-only-001000": Docker machine "download-only-001000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0707 15:44:11.415311   29660 driver.go:373] Setting default libvirt URI to qemu:///system
	W0707 15:44:11.415355   29660 start.go:852] api.Load failed for download-only-001000: filestore "download-only-001000": Docker machine "download-only-001000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0707 15:44:11.443670   29660 out.go:97] Using the hyperkit driver based on existing profile
	I0707 15:44:11.443774   29660 start.go:297] selected driver: hyperkit
	I0707 15:44:11.443786   29660 start.go:944] validating driver "hyperkit" against &{Name:download-only-001000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-001000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0707 15:44:11.444189   29660 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0707 15:44:11.444368   29660 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/16845-29196/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0707 15:44:11.452614   29660 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.30.1
	I0707 15:44:11.456299   29660 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 15:44:11.456331   29660 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0707 15:44:11.458800   29660 cni.go:84] Creating CNI manager for ""
	I0707 15:44:11.458821   29660 cni.go:152] "hyperkit" driver + "docker" runtime found, recommending bridge
	I0707 15:44:11.458833   29660 start_flags.go:319] config:
	{Name:download-only-001000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-001000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0707 15:44:11.459023   29660 iso.go:125] acquiring lock: {Name:mkc26c030f62bdf6e3ab619c68665518d3e66b24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0707 15:44:11.480729   29660 out.go:97] Starting control plane node download-only-001000 in cluster download-only-001000
	I0707 15:44:11.480798   29660 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0707 15:44:11.568846   29660 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0707 15:44:11.568892   29660 cache.go:57] Caching tarball of preloaded images
	I0707 15:44:11.569238   29660 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0707 15:44:11.590677   29660 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0707 15:44:11.590842   29660 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 ...
	I0707 15:44:11.790967   29660 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4?checksum=md5:90b30902fa911e3bcfdde5b24cedf0b2 -> /Users/jenkins/minikube-integration/16845-29196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0707 15:44:21.006420   29660 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 ...
	I0707 15:44:21.006690   29660 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16845-29196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-001000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.30s)

TestDownloadOnly/DeleteAll (0.4s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.40s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-001000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)

TestBinaryMirror (0.99s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-316000 --alsologtostderr --binary-mirror http://127.0.0.1:63245 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-316000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-316000
--- PASS: TestBinaryMirror (0.99s)
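
TestBinaryMirror redirects minikube's kubectl, kubelet, and kubeadm downloads to a local HTTP endpoint via --binary-mirror; the test spins up its own server on 127.0.0.1:63245. A rough sketch of doing the same by hand (the directory is a placeholder, and it is assumed the mirror exposes the same release-path layout that dl.k8s.io uses):

	# Serve a local directory of Kubernetes release binaries, then download through it
	python3 -m http.server 63245 --directory "$HOME/k8s-mirror" &
	out/minikube-darwin-amd64 start --download-only -p binary-mirror-316000 --binary-mirror http://127.0.0.1:63245 --driver=hyperkit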

TestOffline (57.36s)

=== RUN   TestOffline
=== PAUSE TestOffline


=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-337000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-337000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (52.003600318s)
helpers_test.go:175: Cleaning up "offline-docker-337000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-337000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-337000: (5.353441742s)
--- PASS: TestOffline (57.36s)

TestAddons/Setup (209.9s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-589000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-589000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m29.898763711s)
--- PASS: TestAddons/Setup (209.90s)
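
The whole addon set here is enabled in a single start via repeated --addons flags. Equivalently (a usage sketch against the same profile), individual addons can be toggled and inspected after the cluster is up:

	# Enable one addon at a time and list the resulting addon states
	out/minikube-darwin-amd64 -p addons-589000 addons enable metrics-server
	out/minikube-darwin-amd64 -p addons-589000 addons list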

TestAddons/parallel/Registry (15.67s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry


=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 9.733348ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6rz2t" [11f2f5fb-9879-45ad-be57-a96f335698d5] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008615733s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-t4vnr" [c74f9f26-bd87-4884-9d9e-acf07457e258] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007271688s
addons_test.go:316: (dbg) Run:  kubectl --context addons-589000 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-589000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-589000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.022680773s)
addons_test.go:335: (dbg) Run:  out/minikube-darwin-amd64 -p addons-589000 ip
2023/07/07 15:48:11 [DEBUG] GET http://192.168.64.45:5000
addons_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p addons-589000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.67s)
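
The in-cluster probe above uses wget against the registry Service DNS name, while the DEBUG line at 15:48:11 shows the same registry answering on the VM IP at port 5000. A hand-run sketch of that host-side check (assuming the addon is still enabled and that the registry serves the standard Docker Registry v2 API):

	# Resolve the VM IP the same way the test does, then list the registry catalog
	IP=$(out/minikube-darwin-amd64 -p addons-589000 ip)
	curl -s "http://$IP:5000/v2/_catalog"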

TestAddons/parallel/Ingress (20.84s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-589000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-589000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-589000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7403d713-63e9-4aad-8c89-09da9f0568e5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7403d713-63e9-4aad-8c89-09da9f0568e5] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004681001s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p addons-589000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-589000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-amd64 -p addons-589000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.64.45
addons_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p addons-589000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p addons-589000 addons disable ingress-dns --alsologtostderr -v=1: (1.963794241s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 -p addons-589000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-amd64 -p addons-589000 addons disable ingress --alsologtostderr -v=1: (7.573265068s)
--- PASS: TestAddons/parallel/Ingress (20.84s)
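
Both verifications above are plain HTTP and DNS checks and can be replayed manually; the Host header selects the ingress rule, and ingress-dns answers queries sent straight at the node IP:

	minikube -p addons-589000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test $(minikube -p addons-589000 ip)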

TestAddons/parallel/InspektorGadget (10.4s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8z77c" [8bc8b13d-d87e-454e-8e35-eb4dac2ebc09] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.009574565s
addons_test.go:817: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-589000
addons_test.go:817: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-589000: (5.387889987s)
--- PASS: TestAddons/parallel/InspektorGadget (10.40s)

TestAddons/parallel/MetricsServer (5.4s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 2.517442ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-zwk48" [6628e20f-abe0-4784-8723-60693f7421cb] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008471376s
addons_test.go:391: (dbg) Run:  kubectl --context addons-589000 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p addons-589000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.40s)

TestAddons/parallel/HelmTiller (12.19s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 3.02338ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-292gz" [f86f3c40-6693-4594-b826-6f22e0df063d] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.010366788s
addons_test.go:449: (dbg) Run:  kubectl --context addons-589000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-589000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.821644618s)
addons_test.go:466: (dbg) Run:  out/minikube-darwin-amd64 -p addons-589000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.19s)
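
The tiller check boils down to running a throwaway helm client pod against the in-cluster tiller, roughly:

	kubectl --context addons-589000 run --rm helm-test --restart=Never \
	  --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version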

TestAddons/parallel/CSI (61.13s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 3.518685ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-589000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-589000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [47d2b547-6fb2-4278-a8dd-3cd4f572940c] Pending
helpers_test.go:344: "task-pv-pod" [47d2b547-6fb2-4278-a8dd-3cd4f572940c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [47d2b547-6fb2-4278-a8dd-3cd4f572940c] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.008016467s
addons_test.go:560: (dbg) Run:  kubectl --context addons-589000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-589000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-589000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-589000 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-589000 delete pod task-pv-pod: (1.183687575s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-589000 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-589000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-589000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-589000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8ca80cb2-4e73-4936-b1ad-bbed2c000acd] Pending
helpers_test.go:344: "task-pv-pod-restore" [8ca80cb2-4e73-4936-b1ad-bbed2c000acd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8ca80cb2-4e73-4936-b1ad-bbed2c000acd] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.010190237s
addons_test.go:602: (dbg) Run:  kubectl --context addons-589000 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-589000 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-589000 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-darwin-amd64 -p addons-589000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-darwin-amd64 -p addons-589000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.303969813s)
addons_test.go:618: (dbg) Run:  out/minikube-darwin-amd64 -p addons-589000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (61.13s)
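
The repeated helpers_test.go:394 lines above are a poll loop on the PVC phase. A condensed shell equivalent of the provision/snapshot/restore round trip, assuming the same testdata manifests are at hand:

	kubectl --context addons-589000 create -f testdata/csi-hostpath-driver/pvc.yaml
	# poll until the csi-hostpath provisioner reports the claim Bound
	until [ "$(kubectl --context addons-589000 get pvc hpvc -o jsonpath='{.status.phase}')" = "Bound" ]; do sleep 2; done
	kubectl --context addons-589000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-589000 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-589000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-589000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml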

TestAddons/parallel/Headlamp (13.16s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-589000 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-589000 --alsologtostderr -v=1: (1.154175764s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-tbjgj" [89e76b7a-81d2-40f7-b09a-ddf09bf31bc8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-tbjgj" [89e76b7a-81d2-40f7-b09a-ddf09bf31bc8] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.00705392s
--- PASS: TestAddons/parallel/Headlamp (13.16s)

TestAddons/parallel/CloudSpanner (5.35s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-p6gz9" [5d7d9cec-6e38-46fc-8091-16ec079defe9] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007237755s
addons_test.go:836: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-589000
--- PASS: TestAddons/parallel/CloudSpanner (5.35s)

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-589000 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-589000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/StoppedEnableDisable (5.7s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-589000
addons_test.go:148: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-589000: (5.25111875s)
addons_test.go:152: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-589000
addons_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-589000
addons_test.go:161: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-589000
--- PASS: TestAddons/StoppedEnableDisable (5.70s)

TestCertOptions (41.5s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-977000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-977000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (35.84101552s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-977000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-977000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-977000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-977000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-977000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-977000: (5.310967201s)
--- PASS: TestCertOptions (41.50s)
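
To confirm the custom SANs and the non-default API server port landed in the serving certificate, the same openssl inspection can be run by hand:

	minikube -p cert-options-977000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"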

TestCertExpiration (248.28s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-154000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-154000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (34.878731529s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-154000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-154000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (28.105213789s)
helpers_test.go:175: Cleaning up "cert-expiration-154000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-154000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-154000: (5.298147512s)
--- PASS: TestCertExpiration (248.28s)
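
The expiration test leans on the fact that restarting a profile with a new --cert-expiration value regenerates the certificates; in outline:

	minikube start -p cert-expiration-154000 --memory=2048 --cert-expiration=3m --driver=hyperkit
	# ...let the 3m window lapse, then restart with a long expiry to rotate the certs
	minikube start -p cert-expiration-154000 --memory=2048 --cert-expiration=8760h --driver=hyperkit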

TestDockerFlags (49.06s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-209000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-209000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (43.5200921s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-209000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-209000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-209000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-209000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-209000: (5.254087766s)
--- PASS: TestDockerFlags (49.06s)
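
The two systemctl probes above are how the test proves --docker-env and --docker-opt reached the docker unit; they can be replayed directly (expected values per the flags passed at start):

	minikube -p docker-flags-209000 ssh "sudo systemctl show docker --property=Environment --no-pager"  # FOO=BAR, BAZ=BAT
	minikube -p docker-flags-209000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"    # debug and icc=true opts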

TestForceSystemdFlag (42.66s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-942000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-942000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (37.194657946s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-942000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-942000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-942000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-942000: (5.305292137s)
--- PASS: TestForceSystemdFlag (42.66s)
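
Verifying the forced cgroup driver is a one-liner against the running node:

	minikube -p force-systemd-flag-942000 ssh "docker info --format {{.CgroupDriver}}"  # should print: systemd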

TestForceSystemdEnv (39.53s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-433000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-433000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (35.918626156s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-433000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-433000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-433000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-433000: (3.442377822s)
--- PASS: TestForceSystemdEnv (39.53s)

TestHyperKitDriverInstallOrUpdate (6.5s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
E0707 16:18:36.551664   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
--- PASS: TestHyperKitDriverInstallOrUpdate (6.50s)

TestErrorSpam/setup (35.33s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-598000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-598000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 --driver=hyperkit : (35.330186491s)
--- PASS: TestErrorSpam/setup (35.33s)

TestErrorSpam/start (1.22s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-598000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-598000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-598000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 start --dry-run
--- PASS: TestErrorSpam/start (1.22s)

TestErrorSpam/status (0.44s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-598000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-598000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-598000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 status
--- PASS: TestErrorSpam/status (0.44s)

TestErrorSpam/pause (1.25s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-598000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-598000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-598000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 pause
--- PASS: TestErrorSpam/pause (1.25s)

TestErrorSpam/unpause (1.23s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-598000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-598000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-598000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 unpause
--- PASS: TestErrorSpam/unpause (1.23s)

TestErrorSpam/stop (5.64s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-598000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-598000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 stop: (5.22505913s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-598000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-598000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-598000 stop
--- PASS: TestErrorSpam/stop (5.64s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/16845-29196/.minikube/files/etc/test/nested/copy/29643/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (53.04s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-571000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-571000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (53.035511957s)
--- PASS: TestFunctional/serial/StartWithProxy (53.04s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.71s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-571000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-571000 --alsologtostderr -v=8: (38.708245634s)
functional_test.go:659: soft start took 38.708867965s for "functional-571000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.71s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-571000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-571000 cache add registry.k8s.io/pause:3.1: (2.303000337s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-571000 cache add registry.k8s.io/pause:3.3: (2.189702946s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-571000 cache add registry.k8s.io/pause:latest: (1.797205545s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.29s)
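
The cache subcommands pull an image into the host-side cache and load it into the node; a by-hand version of the sequence above:

	minikube -p functional-571000 cache add registry.k8s.io/pause:3.1
	minikube cache list                                    # cached images, host side
	minikube -p functional-571000 ssh sudo crictl images   # and visible inside the node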

TestFunctional/serial/CacheCmd/cache/add_local (1.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-571000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1175576509/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 cache add minikube-local-cache-test:functional-571000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 cache delete minikube-local-cache-test:functional-571000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-571000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.37s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.16s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-571000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (130.289257ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-571000 cache reload: (1.113303894s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
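
The reload flow above deletes the image inside the node, confirms crictl no longer sees it, then restores it from the host-side cache:

	minikube -p functional-571000 ssh sudo docker rmi registry.k8s.io/pause:latest
	minikube -p functional-571000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image gone
	minikube -p functional-571000 cache reload
	minikube -p functional-571000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again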

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.53s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 kubectl -- --context functional-571000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.53s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-571000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)

TestFunctional/serial/ExtraConfig (39.94s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-571000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0707 15:52:56.584351   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 15:52:56.592396   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 15:52:56.604643   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 15:52:56.625001   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 15:52:56.666073   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 15:52:56.747008   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 15:52:56.907805   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 15:52:57.230049   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 15:52:57.872128   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 15:52:59.153976   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 15:53:01.714707   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 15:53:06.836046   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-571000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.938177059s)
functional_test.go:757: restart took 39.938399959s for "functional-571000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.94s)
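
The --extra-config flag threads a component flag through to the running cluster, so the restart above is equivalent to rerunning start on the live profile:

	minikube start -p functional-571000 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	# the option is persisted under ExtraOptions in the profile config
	# (visible in the validating-driver dump in the DryRun log further down)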

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-571000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.79s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-571000 logs: (2.786949391s)
--- PASS: TestFunctional/serial/LogsCmd (2.79s)

TestFunctional/serial/LogsFileCmd (2.58s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd4041738344/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-571000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd4041738344/001/logs.txt: (2.580109808s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.58s)

TestFunctional/serial/InvalidService (4.95s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-571000 apply -f testdata/invalidsvc.yaml
E0707 15:53:17.077362   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-571000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-571000: exit status 115 (262.952049ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.64.47:32206 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-571000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-571000 delete -f testdata/invalidsvc.yaml: (1.44748524s)
--- PASS: TestFunctional/serial/InvalidService (4.95s)

TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-571000 config get cpus: exit status 14 (41.145521ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-571000 config get cpus: exit status 14 (77.271606ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
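
Exit status 14 above accompanies the "specified key could not be found in config" error for a get on an unset key; the full set/get/unset cycle:

	minikube -p functional-571000 config set cpus 2
	minikube -p functional-571000 config get cpus    # prints 2
	minikube -p functional-571000 config unset cpus
	minikube -p functional-571000 config get cpus    # exit status 14: key not set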

TestFunctional/parallel/DashboardCmd (11.99s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-571000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-571000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 30621: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.99s)

TestFunctional/parallel/DryRun (1.04s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-571000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-571000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (452.738165ms)
-- stdout --
	* [functional-571000] minikube v1.30.1 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16845
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16845-29196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16845-29196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0707 15:53:55.330109   30582 out.go:296] Setting OutFile to fd 1 ...
	I0707 15:53:55.330272   30582 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 15:53:55.330279   30582 out.go:309] Setting ErrFile to fd 2...
	I0707 15:53:55.330283   30582 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 15:53:55.330394   30582 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16845-29196/.minikube/bin
	I0707 15:53:55.331761   30582 out.go:303] Setting JSON to false
	I0707 15:53:55.350693   30582 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10400,"bootTime":1688760035,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0707 15:53:55.350780   30582 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0707 15:53:55.371821   30582 out.go:177] * [functional-571000] minikube v1.30.1 on Darwin 13.4.1
	I0707 15:53:55.413792   30582 out.go:177]   - MINIKUBE_LOCATION=16845
	I0707 15:53:55.413836   30582 notify.go:220] Checking for updates...
	I0707 15:53:55.457823   30582 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16845-29196/kubeconfig
	I0707 15:53:55.480734   30582 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0707 15:53:55.501800   30582 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0707 15:53:55.522737   30582 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16845-29196/.minikube
	I0707 15:53:55.543848   30582 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0707 15:53:55.565369   30582 config.go:182] Loaded profile config "functional-571000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0707 15:53:55.566022   30582 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 15:53:55.566089   30582 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 15:53:55.573683   30582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64079
	I0707 15:53:55.574080   30582 main.go:141] libmachine: () Calling .GetVersion
	I0707 15:53:55.574513   30582 main.go:141] libmachine: Using API Version  1
	I0707 15:53:55.574524   30582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 15:53:55.574739   30582 main.go:141] libmachine: () Calling .GetMachineName
	I0707 15:53:55.574839   30582 main.go:141] libmachine: (functional-571000) Calling .DriverName
	I0707 15:53:55.575018   30582 driver.go:373] Setting default libvirt URI to qemu:///system
	I0707 15:53:55.575251   30582 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 15:53:55.575271   30582 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 15:53:55.582202   30582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64081
	I0707 15:53:55.582536   30582 main.go:141] libmachine: () Calling .GetVersion
	I0707 15:53:55.582862   30582 main.go:141] libmachine: Using API Version  1
	I0707 15:53:55.582882   30582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 15:53:55.583094   30582 main.go:141] libmachine: () Calling .GetMachineName
	I0707 15:53:55.583195   30582 main.go:141] libmachine: (functional-571000) Calling .DriverName
	I0707 15:53:55.610753   30582 out.go:177] * Using the hyperkit driver based on existing profile
	I0707 15:53:55.631526   30582 start.go:297] selected driver: hyperkit
	I0707 15:53:55.631543   30582 start.go:944] validating driver "hyperkit" against &{Name:functional-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-571000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.47 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0707 15:53:55.631686   30582 start.go:955] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0707 15:53:55.656724   30582 out.go:177] 
	W0707 15:53:55.678753   30582 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0707 15:53:55.699839   30582 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-571000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.04s)
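
For context: the dry-run fails fast because minikube validates the requested memory before doing any real work. A minimal sketch of that kind of guard, assuming the 1800MB floor from the message above and the exit status 23 visible in the InternationalLanguage run below; this is illustrative, not minikube's actual implementation:

    // memcheck.go - illustrative guard; constant and message echo the log above.
    package main

    import (
        "fmt"
        "os"
    )

    const minUsableMB = 1800 // usable minimum reported by the failed start

    func validateMemory(requestedMB int) error {
        if requestedMB < minUsableMB {
            return fmt.Errorf("requested memory allocation %dMB is less than the usable minimum of %dMB",
                requestedMB, minUsableMB)
        }
        return nil
    }

    func main() {
        if err := validateMemory(250); err != nil {
            fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
            os.Exit(23) // the test treats exit status 23 as the expected outcome
        }
    }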

TestFunctional/parallel/InternationalLanguage (0.57s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-571000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-571000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (573.459338ms)

-- stdout --
	* [functional-571000] minikube v1.30.1 sur Darwin 13.4.1
	  - MINIKUBE_LOCATION=16845
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16845-29196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16845-29196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0707 15:53:54.749726   30575 out.go:296] Setting OutFile to fd 1 ...
	I0707 15:53:54.749869   30575 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 15:53:54.749873   30575 out.go:309] Setting ErrFile to fd 2...
	I0707 15:53:54.749877   30575 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 15:53:54.750011   30575 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16845-29196/.minikube/bin
	I0707 15:53:54.751509   30575 out.go:303] Setting JSON to false
	I0707 15:53:54.770815   30575 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10399,"bootTime":1688760035,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0707 15:53:54.771559   30575 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0707 15:53:54.797824   30575 out.go:177] * [functional-571000] minikube v1.30.1 sur Darwin 13.4.1
	I0707 15:53:54.840159   30575 notify.go:220] Checking for updates...
	I0707 15:53:54.840170   30575 out.go:177]   - MINIKUBE_LOCATION=16845
	I0707 15:53:54.884328   30575 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16845-29196/kubeconfig
	I0707 15:53:54.971302   30575 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0707 15:53:55.028707   30575 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0707 15:53:55.049840   30575 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16845-29196/.minikube
	I0707 15:53:55.070659   30575 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0707 15:53:55.092183   30575 config.go:182] Loaded profile config "functional-571000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0707 15:53:55.092840   30575 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 15:53:55.092932   30575 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 15:53:55.100951   30575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64074
	I0707 15:53:55.101325   30575 main.go:141] libmachine: () Calling .GetVersion
	I0707 15:53:55.101763   30575 main.go:141] libmachine: Using API Version  1
	I0707 15:53:55.101786   30575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 15:53:55.102004   30575 main.go:141] libmachine: () Calling .GetMachineName
	I0707 15:53:55.102110   30575 main.go:141] libmachine: (functional-571000) Calling .DriverName
	I0707 15:53:55.102296   30575 driver.go:373] Setting default libvirt URI to qemu:///system
	I0707 15:53:55.102534   30575 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 15:53:55.102563   30575 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 15:53:55.109460   30575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64076
	I0707 15:53:55.109812   30575 main.go:141] libmachine: () Calling .GetVersion
	I0707 15:53:55.110156   30575 main.go:141] libmachine: Using API Version  1
	I0707 15:53:55.110176   30575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 15:53:55.110420   30575 main.go:141] libmachine: () Calling .GetMachineName
	I0707 15:53:55.110523   30575 main.go:141] libmachine: (functional-571000) Calling .DriverName
	I0707 15:53:55.137648   30575 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0707 15:53:55.179463   30575 start.go:297] selected driver: hyperkit
	I0707 15:53:55.179480   30575 start.go:944] validating driver "hyperkit" against &{Name:functional-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1688681246-16834@sha256:849205234efc46da016f3a964268d7c76363fc521532c280d0e8a6bf1cc393b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-571000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.47 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0707 15:53:55.179643   30575 start.go:955] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0707 15:53:55.203925   30575 out.go:177] 
	W0707 15:53:55.225573   30575 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0707 15:53:55.246693   30575 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.57s)
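
The French output above ("Utilisation du pilote hyperkit basé sur le profil existant" is "Using the hyperkit driver based on existing profile"; "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY" is "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY") is minikube's own localization. A sketch of how the test presumably drives it, assuming a French locale is injected via the child environment (the log itself does not show the environment, and LC_ALL=fr is an assumption):

    // frlocale.go - hypothetical reproduction of the localized dry-run.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-571000",
            "--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=hyperkit")
        cmd.Env = append(os.Environ(), "LC_ALL=fr") // assumption: locale selected via environment
        out, _ := cmd.CombinedOutput()              // exit status 23 is the expected outcome here
        fmt.Printf("%s", out)
    }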

TestFunctional/parallel/StatusCmd (0.44s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.44s)
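
The -f argument above is a Go text/template rendered against minikube's status struct (the "kublet" key is a literal label in the test's template string, not a field name). A self-contained sketch of how such a template renders, using a stand-in struct rather than minikube's real type:

    // statusfmt.go - renders the same template string the test passes via -f.
    package main

    import (
        "os"
        "text/template"
    )

    // Status is a stand-in with the fields the template references.
    type Status struct {
        Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
        const f = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
        t := template.Must(template.New("status").Parse(f))
        s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
        if err := t.Execute(os.Stdout, s); err != nil {
            panic(err)
        }
        // Output: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
    }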

TestFunctional/parallel/ServiceCmdConnect (13.37s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-571000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-571000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-kqlkf" [00be4c35-8410-4358-a32e-aef745a6691e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-kqlkf" [00be4c35-8410-4358-a32e-aef745a6691e] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.011872711s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.64.47:30591
functional_test.go:1674: http://192.168.64.47:30591: success! body:

Hostname: hello-node-connect-6fb669fc84-kqlkf

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.64.47:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.64.47:30591
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.37s)
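
The flow above is: create a deployment, expose it as a NodePort service, wait for the pod, then fetch the URL that `minikube service ... --url` printed. A minimal sketch of the final reachability check, reusing the endpoint from this log and an assumed 30-second budget:

    // probe.go - polls the NodePort endpoint until the echoserver answers.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        const url = "http://192.168.64.47:30591" // endpoint reported above
        deadline := time.Now().Add(30 * time.Second)
        for {
            resp, err := http.Get(url)
            if err == nil && resp.StatusCode == http.StatusOK {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("success! body:\n%s\n", body)
                return
            }
            if err == nil {
                resp.Body.Close()
            }
            if time.Now().After(deadline) {
                panic("service never became reachable at " + url)
            }
            time.Sleep(time.Second)
        }
    }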

TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/PersistentVolumeClaim (27.74s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f34b1ffe-f253-4028-88c5-1695e8d132c8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011976495s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-571000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-571000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-571000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-571000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0a095a59-1e88-4534-b3b8-86afdd0e969a] Pending
helpers_test.go:344: "sp-pod" [0a095a59-1e88-4534-b3b8-86afdd0e969a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0a095a59-1e88-4534-b3b8-86afdd0e969a] Running
E0707 15:53:37.558319   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.010435938s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-571000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-571000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-571000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [312aa3f5-a613-4f7f-9dc9-9cefa29be981] Pending
helpers_test.go:344: "sp-pod" [312aa3f5-a613-4f7f-9dc9-9cefa29be981] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [312aa3f5-a613-4f7f-9dc9-9cefa29be981] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.009407007s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-571000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.74s)
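
The persistence check above boils down to: write a file through the first pod, delete and recreate the pod against the same claim, and confirm the file survived. A condensed sketch of those steps via kubectl (the real test also waits for the recreated pod to be Running between the apply and the ls):

    // pvccheck.go - replays the write/recreate/read steps from this test.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func kubectl(args ...string) []byte {
        full := append([]string{"--context", "functional-571000"}, args...)
        out, err := exec.Command("kubectl", full...).CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
        }
        return out
    }

    func main() {
        kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
        kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
        kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
        // ...wait here for sp-pod to be Running again, as the test does...
        fmt.Printf("%s", kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")) // should list "foo"
    }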

TestFunctional/parallel/SSHCmd (0.27s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.27s)

TestFunctional/parallel/CpCmd (0.6s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh -n functional-571000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 cp functional-571000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd1865158515/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh -n functional-571000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.60s)

TestFunctional/parallel/MySQL (26.4s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-571000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-7kdr2" [095cacce-90e2-44d9-a1ac-07fcfe5b027a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-7kdr2" [095cacce-90e2-44d9-a1ac-07fcfe5b027a] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.007593727s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-571000 exec mysql-7db894d786-7kdr2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-571000 exec mysql-7db894d786-7kdr2 -- mysql -ppassword -e "show databases;": exit status 1 (113.444898ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-571000 exec mysql-7db894d786-7kdr2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-571000 exec mysql-7db894d786-7kdr2 -- mysql -ppassword -e "show databases;": exit status 1 (105.958477ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-571000 exec mysql-7db894d786-7kdr2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.40s)
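
The two ERROR 2002 exits above are expected: mysqld is still initializing its socket when the pod first reports Running, so the check is simply retried until it succeeds. The same retry-until-ready pattern as a sketch, with an assumed two-minute budget:

    // mysqlwait.go - repeats the query until the server accepts connections.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for {
            out, err := exec.Command("kubectl", "--context", "functional-571000",
                "exec", "mysql-7db894d786-7kdr2", "--",
                "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
            if err == nil {
                fmt.Printf("%s", out)
                return
            }
            if time.Now().After(deadline) {
                panic(fmt.Sprintf("mysql never came up: %v\n%s", err, out))
            }
            time.Sleep(2 * time.Second) // ERROR 2002: socket not ready yet, try again
        }
    }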

TestFunctional/parallel/FileSync (0.15s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/29643/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "sudo cat /etc/test/nested/copy/29643/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.15s)

TestFunctional/parallel/CertSync (0.88s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/29643.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "sudo cat /etc/ssl/certs/29643.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/29643.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "sudo cat /usr/share/ca-certificates/29643.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/296432.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "sudo cat /etc/ssl/certs/296432.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/296432.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "sudo cat /usr/share/ca-certificates/296432.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.88s)
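
The filenames being checked are not arbitrary: /etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0 follow OpenSSL's <subject-hash>.0 naming convention, so the test verifies the synced certs land under the names the system trust store actually looks up. A sketch of computing that hash for a cert (the input path is a placeholder):

    // certhash.go - prints the trust-store filename a cert would get.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // `openssl x509 -hash` prints the subject hash used for the filename.
        out, err := exec.Command("openssl", "x509", "-noout", "-hash",
            "-in", "/path/to/29643.pem").Output()
        if err != nil {
            panic(err)
        }
        fmt.Printf("expected name in /etc/ssl/certs: %s.0\n", strings.TrimSpace(string(out)))
    }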

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-571000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-571000 ssh "sudo systemctl is-active crio": exit status 1 (170.354429ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)
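
The non-zero exit here is the passing case: `systemctl is-active` prints "inactive" and exits with status 3 when the unit is not running, so the assertion is on the printed state rather than on the exit code. A sketch of that assertion:

    // runtimecheck.go - crio must be inactive while docker is the runtime.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Output() still returns the captured stdout when the command exits non-zero.
        out, _ := exec.Command("systemctl", "is-active", "crio").Output()
        state := strings.TrimSpace(string(out))
        if state != "inactive" {
            panic("expected crio to be inactive, got: " + state)
        }
        fmt.Println("crio is", state)
    }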

TestFunctional/parallel/License (0.76s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.76s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.35s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-571000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-571000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-571000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-571000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 30400: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.35s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-571000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.19s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-571000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d9425067-10fd-4a44-895e-e50feb611b3f] Pending
helpers_test.go:344: "nginx-svc" [d9425067-10fd-4a44-895e-e50feb611b3f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d9425067-10fd-4a44-895e-e50feb611b3f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.005869668s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.19s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-571000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.4.195 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-571000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-571000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-571000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-fg7zr" [f0d62692-3c90-416b-9ab9-953c56727d0c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-fg7zr" [f0d62692-3c90-416b-9ab9-953c56727d0c] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.008935171s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "189.436203ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "65.514289ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "187.488256ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "65.312096ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

TestFunctional/parallel/MountCmd/any-port (7.96s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-571000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1923830551/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1688770429810064000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1923830551/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1688770429810064000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1923830551/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1688770429810064000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1923830551/001/test-1688770429810064000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-571000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (137.794641ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul  7 22:53 created-by-test
-rw-r--r-- 1 docker docker 24 Jul  7 22:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul  7 22:53 test-1688770429810064000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh cat /mount-9p/test-1688770429810064000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-571000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [35c4a346-47f1-4b2b-840e-9d29d62d8b8e] Pending
helpers_test.go:344: "busybox-mount" [35c4a346-47f1-4b2b-840e-9d29d62d8b8e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [35c4a346-47f1-4b2b-840e-9d29d62d8b8e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [35c4a346-47f1-4b2b-840e-9d29d62d8b8e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.007307006s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-571000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-571000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1923830551/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.96s)
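
The first findmnt probe fails because the 9p mount is established asynchronously after `minikube mount` starts; the test retries until the mount shows up, then exercises it. That polling step as a sketch, with an assumed retry count:

    // mountwait.go - polls over `minikube ssh` until /mount-9p is mounted.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        for i := 0; i < 20; i++ {
            out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-571000",
                "ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
            if err == nil {
                fmt.Printf("mounted:\n%s", out)
                return
            }
            time.Sleep(500 * time.Millisecond) // mount not established yet
        }
        panic("/mount-9p never appeared")
    }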

TestFunctional/parallel/ServiceCmd/List (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.37s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 service list -o json
functional_test.go:1493: Took "361.961594ms" to run "out/minikube-darwin-amd64 -p functional-571000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.64.47:30504
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

TestFunctional/parallel/ServiceCmd/Format (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.24s)

TestFunctional/parallel/ServiceCmd/URL (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.64.47:30504
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)

TestFunctional/parallel/MountCmd/specific-port (2.28s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-571000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port3346821051/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-571000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (140.817222ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-571000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (173.938377ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-571000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port3346821051/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-571000 ssh "sudo umount -f /mount-9p": exit status 1 (119.366229ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-571000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-571000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port3346821051/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.28s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-571000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1116743527/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-571000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1116743527/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-571000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1116743527/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-571000 ssh "findmnt -T" /mount1: exit status 1 (168.356438ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-571000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-571000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1116743527/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-571000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1116743527/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-571000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1116743527/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (0.44s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-571000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-571000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-571000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-571000 image ls --format short --alsologtostderr:
I0707 15:54:24.848857   30877 out.go:296] Setting OutFile to fd 1 ...
I0707 15:54:24.849106   30877 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0707 15:54:24.849113   30877 out.go:309] Setting ErrFile to fd 2...
I0707 15:54:24.849117   30877 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0707 15:54:24.849252   30877 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16845-29196/.minikube/bin
I0707 15:54:24.849964   30877 config.go:182] Loaded profile config "functional-571000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0707 15:54:24.850085   30877 config.go:182] Loaded profile config "functional-571000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0707 15:54:24.850466   30877 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0707 15:54:24.850534   30877 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0707 15:54:24.858319   30877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64416
I0707 15:54:24.858803   30877 main.go:141] libmachine: () Calling .GetVersion
I0707 15:54:24.859300   30877 main.go:141] libmachine: Using API Version  1
I0707 15:54:24.859313   30877 main.go:141] libmachine: () Calling .SetConfigRaw
I0707 15:54:24.859545   30877 main.go:141] libmachine: () Calling .GetMachineName
I0707 15:54:24.859655   30877 main.go:141] libmachine: (functional-571000) Calling .GetState
I0707 15:54:24.859755   30877 main.go:141] libmachine: (functional-571000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0707 15:54:24.859837   30877 main.go:141] libmachine: (functional-571000) DBG | hyperkit pid from json: 30095
I0707 15:54:24.861159   30877 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0707 15:54:24.861181   30877 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0707 15:54:24.868508   30877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64418
I0707 15:54:24.868864   30877 main.go:141] libmachine: () Calling .GetVersion
I0707 15:54:24.869298   30877 main.go:141] libmachine: Using API Version  1
I0707 15:54:24.869314   30877 main.go:141] libmachine: () Calling .SetConfigRaw
I0707 15:54:24.869542   30877 main.go:141] libmachine: () Calling .GetMachineName
I0707 15:54:24.869668   30877 main.go:141] libmachine: (functional-571000) Calling .DriverName
I0707 15:54:24.869829   30877 ssh_runner.go:195] Run: systemctl --version
I0707 15:54:24.869850   30877 main.go:141] libmachine: (functional-571000) Calling .GetSSHHostname
I0707 15:54:24.869948   30877 main.go:141] libmachine: (functional-571000) Calling .GetSSHPort
I0707 15:54:24.870048   30877 main.go:141] libmachine: (functional-571000) Calling .GetSSHKeyPath
I0707 15:54:24.870131   30877 main.go:141] libmachine: (functional-571000) Calling .GetSSHUsername
I0707 15:54:24.870239   30877 sshutil.go:53] new ssh client: &{IP:192.168.64.47 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/functional-571000/id_rsa Username:docker}
I0707 15:54:24.910753   30877 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0707 15:54:24.928857   30877 main.go:141] libmachine: Making call to close driver server
I0707 15:54:24.928895   30877 main.go:141] libmachine: (functional-571000) Calling .Close
I0707 15:54:24.929109   30877 main.go:141] libmachine: (functional-571000) DBG | Closing plugin on server side
I0707 15:54:24.929120   30877 main.go:141] libmachine: Successfully made call to close driver server
I0707 15:54:24.929131   30877 main.go:141] libmachine: Making call to close connection to plugin binary
I0707 15:54:24.929141   30877 main.go:141] libmachine: Making call to close driver server
I0707 15:54:24.929147   30877 main.go:141] libmachine: (functional-571000) Calling .Close
I0707 15:54:24.929273   30877 main.go:141] libmachine: Successfully made call to close driver server
I0707 15:54:24.929289   30877 main.go:141] libmachine: Making call to close connection to plugin binary
I0707 15:54:24.929311   30877 main.go:141] libmachine: (functional-571000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.15s)
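For scripting, the short format above is the easiest of the four list formats to filter. A minimal sketch (the grep pattern is illustrative, not part of the test):

    # check that a specific image is present in the cluster's runtime
    out/minikube-darwin-amd64 -p functional-571000 image ls --format short | grep -q 'registry.k8s.io/pause:3.9' && echo "pause image present"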

TestFunctional/parallel/ImageCommands/ImageListTable (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-571000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/nginx                     | latest            | 021283c8eb95b | 187MB  |
| registry.k8s.io/kube-scheduler              | v1.27.3           | 41697ceeb70b3 | 58.4MB |
| docker.io/library/mysql                     | 5.7               | 2be84dd575ee2 | 569MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/localhost/my-image                | functional-571000 | d84f4639c53d9 | 1.24MB |
| registry.k8s.io/kube-proxy                  | v1.27.3           | 5780543258cf0 | 71.1MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| gcr.io/google-containers/addon-resizer      | functional-571000 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/minikube-local-cache-test | functional-571000 | 25d96546e3566 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.27.3           | 08a0c939e61b7 | 121MB  |
| registry.k8s.io/kube-controller-manager     | v1.27.3           | 7cffc01dba0e1 | 112MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | 4937520ae206c | 41.4MB |
| registry.k8s.io/etcd                        | 3.5.7-0           | 86b6af7dd652c | 296MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-571000 image ls --format table --alsologtostderr:
I0707 15:54:28.774200   30904 out.go:296] Setting OutFile to fd 1 ...
I0707 15:54:28.774405   30904 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0707 15:54:28.774410   30904 out.go:309] Setting ErrFile to fd 2...
I0707 15:54:28.774414   30904 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0707 15:54:28.774530   30904 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16845-29196/.minikube/bin
I0707 15:54:28.775148   30904 config.go:182] Loaded profile config "functional-571000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0707 15:54:28.775234   30904 config.go:182] Loaded profile config "functional-571000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0707 15:54:28.775557   30904 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0707 15:54:28.775618   30904 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0707 15:54:28.782414   30904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64448
I0707 15:54:28.782823   30904 main.go:141] libmachine: () Calling .GetVersion
I0707 15:54:28.783270   30904 main.go:141] libmachine: Using API Version  1
I0707 15:54:28.783283   30904 main.go:141] libmachine: () Calling .SetConfigRaw
I0707 15:54:28.783526   30904 main.go:141] libmachine: () Calling .GetMachineName
I0707 15:54:28.783655   30904 main.go:141] libmachine: (functional-571000) Calling .GetState
I0707 15:54:28.783746   30904 main.go:141] libmachine: (functional-571000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0707 15:54:28.783801   30904 main.go:141] libmachine: (functional-571000) DBG | hyperkit pid from json: 30095
I0707 15:54:28.784998   30904 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0707 15:54:28.785027   30904 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0707 15:54:28.792130   30904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64450
I0707 15:54:28.792481   30904 main.go:141] libmachine: () Calling .GetVersion
I0707 15:54:28.792833   30904 main.go:141] libmachine: Using API Version  1
I0707 15:54:28.792847   30904 main.go:141] libmachine: () Calling .SetConfigRaw
I0707 15:54:28.793056   30904 main.go:141] libmachine: () Calling .GetMachineName
I0707 15:54:28.793149   30904 main.go:141] libmachine: (functional-571000) Calling .DriverName
I0707 15:54:28.793282   30904 ssh_runner.go:195] Run: systemctl --version
I0707 15:54:28.793302   30904 main.go:141] libmachine: (functional-571000) Calling .GetSSHHostname
I0707 15:54:28.793392   30904 main.go:141] libmachine: (functional-571000) Calling .GetSSHPort
I0707 15:54:28.793466   30904 main.go:141] libmachine: (functional-571000) Calling .GetSSHKeyPath
I0707 15:54:28.793542   30904 main.go:141] libmachine: (functional-571000) Calling .GetSSHUsername
I0707 15:54:28.793619   30904 sshutil.go:53] new ssh client: &{IP:192.168.64.47 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/functional-571000/id_rsa Username:docker}
I0707 15:54:28.832941   30904 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0707 15:54:28.853299   30904 main.go:141] libmachine: Making call to close driver server
I0707 15:54:28.853309   30904 main.go:141] libmachine: (functional-571000) Calling .Close
I0707 15:54:28.853467   30904 main.go:141] libmachine: Successfully made call to close driver server
I0707 15:54:28.853477   30904 main.go:141] libmachine: Making call to close connection to plugin binary
I0707 15:54:28.853483   30904 main.go:141] libmachine: Making call to close driver server
I0707 15:54:28.853491   30904 main.go:141] libmachine: (functional-571000) Calling .Close
I0707 15:54:28.853628   30904 main.go:141] libmachine: (functional-571000) DBG | Closing plugin on server side
I0707 15:54:28.853628   30904 main.go:141] libmachine: Successfully made call to close driver server
I0707 15:54:28.853639   30904 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.17s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-571000 image ls --format json --alsologtostderr:
[{"id":"08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"121000000"},{"id":"41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"58400000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"25d96546e35665299b4c8d046f9b4e00cd90e0217029937c7861c5a52062f1ff","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-571000"],"size":"30"},{"id":"021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"41400000"},{"id":"5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"71100000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"112000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-571000"],"size":"32900000"},{"id":"d84f4639c53d97110bb521e9d52f50c9d6d28b4e6f41be04e0812b7a080241f2","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-571000"],"size":"1240000"},{"id":"2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"569000000"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"296000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-571000 image ls --format json --alsologtostderr:
I0707 15:54:28.628453   30900 out.go:296] Setting OutFile to fd 1 ...
I0707 15:54:28.628709   30900 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0707 15:54:28.628715   30900 out.go:309] Setting ErrFile to fd 2...
I0707 15:54:28.628719   30900 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0707 15:54:28.628874   30900 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16845-29196/.minikube/bin
I0707 15:54:28.629467   30900 config.go:182] Loaded profile config "functional-571000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0707 15:54:28.629563   30900 config.go:182] Loaded profile config "functional-571000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0707 15:54:28.629901   30900 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0707 15:54:28.629961   30900 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0707 15:54:28.636763   30900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64443
I0707 15:54:28.637168   30900 main.go:141] libmachine: () Calling .GetVersion
I0707 15:54:28.637622   30900 main.go:141] libmachine: Using API Version  1
I0707 15:54:28.637634   30900 main.go:141] libmachine: () Calling .SetConfigRaw
I0707 15:54:28.637905   30900 main.go:141] libmachine: () Calling .GetMachineName
I0707 15:54:28.638029   30900 main.go:141] libmachine: (functional-571000) Calling .GetState
I0707 15:54:28.638113   30900 main.go:141] libmachine: (functional-571000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0707 15:54:28.638175   30900 main.go:141] libmachine: (functional-571000) DBG | hyperkit pid from json: 30095
I0707 15:54:28.639417   30900 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0707 15:54:28.639435   30900 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0707 15:54:28.646775   30900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64445
I0707 15:54:28.647139   30900 main.go:141] libmachine: () Calling .GetVersion
I0707 15:54:28.647510   30900 main.go:141] libmachine: Using API Version  1
I0707 15:54:28.647522   30900 main.go:141] libmachine: () Calling .SetConfigRaw
I0707 15:54:28.647748   30900 main.go:141] libmachine: () Calling .GetMachineName
I0707 15:54:28.647842   30900 main.go:141] libmachine: (functional-571000) Calling .DriverName
I0707 15:54:28.648001   30900 ssh_runner.go:195] Run: systemctl --version
I0707 15:54:28.648022   30900 main.go:141] libmachine: (functional-571000) Calling .GetSSHHostname
I0707 15:54:28.648112   30900 main.go:141] libmachine: (functional-571000) Calling .GetSSHPort
I0707 15:54:28.648192   30900 main.go:141] libmachine: (functional-571000) Calling .GetSSHKeyPath
I0707 15:54:28.648279   30900 main.go:141] libmachine: (functional-571000) Calling .GetSSHUsername
I0707 15:54:28.648368   30900 sshutil.go:53] new ssh client: &{IP:192.168.64.47 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/functional-571000/id_rsa Username:docker}
I0707 15:54:28.688437   30900 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0707 15:54:28.706307   30900 main.go:141] libmachine: Making call to close driver server
I0707 15:54:28.706340   30900 main.go:141] libmachine: (functional-571000) Calling .Close
I0707 15:54:28.706557   30900 main.go:141] libmachine: Successfully made call to close driver server
I0707 15:54:28.706569   30900 main.go:141] libmachine: Making call to close connection to plugin binary
I0707 15:54:28.706578   30900 main.go:141] libmachine: Making call to close driver server
I0707 15:54:28.706583   30900 main.go:141] libmachine: (functional-571000) Calling .Close
I0707 15:54:28.706760   30900 main.go:141] libmachine: (functional-571000) DBG | Closing plugin on server side
I0707 15:54:28.707401   30900 main.go:141] libmachine: Successfully made call to close driver server
I0707 15:54:28.707426   30900 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.14s)
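The JSON stdout above is a single array of objects with id, repoDigests, repoTags and size fields, which makes it the natural format for post-processing. A minimal sketch, assuming jq is available (jq is not used by the test itself):

    # list every tag reported by the cluster's container runtime
    out/minikube-darwin-amd64 -p functional-571000 image ls --format json | jq -r '.[].repoTags[]'
    # sum the reported image sizes (bytes)
    out/minikube-darwin-amd64 -p functional-571000 image ls --format json | jq '[.[].size | tonumber] | add'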

TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-571000 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "121000000"
- id: 5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "71100000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-571000
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 25d96546e35665299b4c8d046f9b4e00cd90e0217029937c7861c5a52062f1ff
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-571000
size: "30"
- id: 4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "41400000"
- id: 7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "112000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "58400000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "296000000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-571000 image ls --format yaml --alsologtostderr:
I0707 15:54:24.997211   30881 out.go:296] Setting OutFile to fd 1 ...
I0707 15:54:24.997422   30881 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0707 15:54:24.997428   30881 out.go:309] Setting ErrFile to fd 2...
I0707 15:54:24.997432   30881 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0707 15:54:24.997548   30881 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16845-29196/.minikube/bin
I0707 15:54:24.998148   30881 config.go:182] Loaded profile config "functional-571000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0707 15:54:24.998238   30881 config.go:182] Loaded profile config "functional-571000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0707 15:54:24.998588   30881 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0707 15:54:24.998650   30881 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0707 15:54:25.005647   30881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64421
I0707 15:54:25.006089   30881 main.go:141] libmachine: () Calling .GetVersion
I0707 15:54:25.006541   30881 main.go:141] libmachine: Using API Version  1
I0707 15:54:25.006554   30881 main.go:141] libmachine: () Calling .SetConfigRaw
I0707 15:54:25.006763   30881 main.go:141] libmachine: () Calling .GetMachineName
I0707 15:54:25.006861   30881 main.go:141] libmachine: (functional-571000) Calling .GetState
I0707 15:54:25.006945   30881 main.go:141] libmachine: (functional-571000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0707 15:54:25.007014   30881 main.go:141] libmachine: (functional-571000) DBG | hyperkit pid from json: 30095
I0707 15:54:25.009142   30881 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0707 15:54:25.009173   30881 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0707 15:54:25.016450   30881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64423
I0707 15:54:25.016844   30881 main.go:141] libmachine: () Calling .GetVersion
I0707 15:54:25.017232   30881 main.go:141] libmachine: Using API Version  1
I0707 15:54:25.017251   30881 main.go:141] libmachine: () Calling .SetConfigRaw
I0707 15:54:25.017472   30881 main.go:141] libmachine: () Calling .GetMachineName
I0707 15:54:25.017566   30881 main.go:141] libmachine: (functional-571000) Calling .DriverName
I0707 15:54:25.017714   30881 ssh_runner.go:195] Run: systemctl --version
I0707 15:54:25.017733   30881 main.go:141] libmachine: (functional-571000) Calling .GetSSHHostname
I0707 15:54:25.017815   30881 main.go:141] libmachine: (functional-571000) Calling .GetSSHPort
I0707 15:54:25.017884   30881 main.go:141] libmachine: (functional-571000) Calling .GetSSHKeyPath
I0707 15:54:25.017967   30881 main.go:141] libmachine: (functional-571000) Calling .GetSSHUsername
I0707 15:54:25.018050   30881 sshutil.go:53] new ssh client: &{IP:192.168.64.47 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/functional-571000/id_rsa Username:docker}
I0707 15:54:25.058228   30881 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0707 15:54:25.084436   30881 main.go:141] libmachine: Making call to close driver server
I0707 15:54:25.084446   30881 main.go:141] libmachine: (functional-571000) Calling .Close
I0707 15:54:25.084616   30881 main.go:141] libmachine: Successfully made call to close driver server
I0707 15:54:25.084628   30881 main.go:141] libmachine: Making call to close connection to plugin binary
I0707 15:54:25.084634   30881 main.go:141] libmachine: Making call to close driver server
I0707 15:54:25.084642   30881 main.go:141] libmachine: (functional-571000) Calling .Close
I0707 15:54:25.084802   30881 main.go:141] libmachine: (functional-571000) DBG | Closing plugin on server side
I0707 15:54:25.084850   30881 main.go:141] libmachine: Successfully made call to close driver server
I0707 15:54:25.084859   30881 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-571000 ssh pgrep buildkitd: exit status 1 (120.877485ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image build -t localhost/my-image:functional-571000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-571000 image build -t localhost/my-image:functional-571000 testdata/build --alsologtostderr: (3.200885624s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-571000 image build -t localhost/my-image:functional-571000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 928717941aec
Removing intermediate container 928717941aec
---> f39913c25076
Step 3/3 : ADD content.txt /
---> d84f4639c53d
Successfully built d84f4639c53d
Successfully tagged localhost/my-image:functional-571000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-571000 image build -t localhost/my-image:functional-571000 testdata/build --alsologtostderr:
I0707 15:54:25.275531   30890 out.go:296] Setting OutFile to fd 1 ...
I0707 15:54:25.276308   30890 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0707 15:54:25.276316   30890 out.go:309] Setting ErrFile to fd 2...
I0707 15:54:25.276321   30890 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0707 15:54:25.276440   30890 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16845-29196/.minikube/bin
I0707 15:54:25.277051   30890 config.go:182] Loaded profile config "functional-571000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0707 15:54:25.277714   30890 config.go:182] Loaded profile config "functional-571000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0707 15:54:25.278083   30890 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0707 15:54:25.278121   30890 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0707 15:54:25.285711   30890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64433
I0707 15:54:25.286176   30890 main.go:141] libmachine: () Calling .GetVersion
I0707 15:54:25.286689   30890 main.go:141] libmachine: Using API Version  1
I0707 15:54:25.286704   30890 main.go:141] libmachine: () Calling .SetConfigRaw
I0707 15:54:25.286945   30890 main.go:141] libmachine: () Calling .GetMachineName
I0707 15:54:25.287062   30890 main.go:141] libmachine: (functional-571000) Calling .GetState
I0707 15:54:25.287151   30890 main.go:141] libmachine: (functional-571000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0707 15:54:25.287223   30890 main.go:141] libmachine: (functional-571000) DBG | hyperkit pid from json: 30095
I0707 15:54:25.288504   30890 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0707 15:54:25.288526   30890 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0707 15:54:25.295552   30890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64435
I0707 15:54:25.295937   30890 main.go:141] libmachine: () Calling .GetVersion
I0707 15:54:25.296261   30890 main.go:141] libmachine: Using API Version  1
I0707 15:54:25.296270   30890 main.go:141] libmachine: () Calling .SetConfigRaw
I0707 15:54:25.296509   30890 main.go:141] libmachine: () Calling .GetMachineName
I0707 15:54:25.296621   30890 main.go:141] libmachine: (functional-571000) Calling .DriverName
I0707 15:54:25.296773   30890 ssh_runner.go:195] Run: systemctl --version
I0707 15:54:25.296792   30890 main.go:141] libmachine: (functional-571000) Calling .GetSSHHostname
I0707 15:54:25.296878   30890 main.go:141] libmachine: (functional-571000) Calling .GetSSHPort
I0707 15:54:25.296963   30890 main.go:141] libmachine: (functional-571000) Calling .GetSSHKeyPath
I0707 15:54:25.297046   30890 main.go:141] libmachine: (functional-571000) Calling .GetSSHUsername
I0707 15:54:25.297129   30890 sshutil.go:53] new ssh client: &{IP:192.168.64.47 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/functional-571000/id_rsa Username:docker}
I0707 15:54:25.339291   30890 build_images.go:151] Building image from path: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.2874730905.tar
I0707 15:54:25.339435   30890 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0707 15:54:25.347190   30890 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2874730905.tar
I0707 15:54:25.351826   30890 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2874730905.tar: stat -c "%s %y" /var/lib/minikube/build/build.2874730905.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2874730905.tar': No such file or directory
I0707 15:54:25.351887   30890 ssh_runner.go:362] scp /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.2874730905.tar --> /var/lib/minikube/build/build.2874730905.tar (3072 bytes)
I0707 15:54:25.378986   30890 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2874730905
I0707 15:54:25.389290   30890 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2874730905 -xf /var/lib/minikube/build/build.2874730905.tar
I0707 15:54:25.396810   30890 docker.go:339] Building image: /var/lib/minikube/build/build.2874730905
I0707 15:54:25.396876   30890 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-571000 /var/lib/minikube/build/build.2874730905
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I0707 15:54:28.381907   30890 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-571000 /var/lib/minikube/build/build.2874730905: (2.984937701s)
I0707 15:54:28.381978   30890 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2874730905
I0707 15:54:28.388672   30890 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2874730905.tar
I0707 15:54:28.395003   30890 build_images.go:207] Built localhost/my-image:functional-571000 from /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.2874730905.tar
I0707 15:54:28.395024   30890 build_images.go:123] succeeded building to: functional-571000
I0707 15:54:28.395029   30890 build_images.go:124] failed building to: 
I0707 15:54:28.395084   30890 main.go:141] libmachine: Making call to close driver server
I0707 15:54:28.395092   30890 main.go:141] libmachine: (functional-571000) Calling .Close
I0707 15:54:28.395392   30890 main.go:141] libmachine: Successfully made call to close driver server
I0707 15:54:28.395396   30890 main.go:141] libmachine: (functional-571000) DBG | Closing plugin on server side
I0707 15:54:28.395402   30890 main.go:141] libmachine: Making call to close connection to plugin binary
I0707 15:54:28.395413   30890 main.go:141] libmachine: Making call to close driver server
I0707 15:54:28.395434   30890 main.go:141] libmachine: (functional-571000) Calling .Close
I0707 15:54:28.395650   30890 main.go:141] libmachine: (functional-571000) DBG | Closing plugin on server side
I0707 15:54:28.395683   30890 main.go:141] libmachine: Successfully made call to close driver server
I0707 15:54:28.395692   30890 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)
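The three build steps in the stdout above imply that the testdata/build context contains a Dockerfile equivalent to the reconstruction below; the fixture's actual file layout and the content.txt payload are assumptions. As a sketch, the same build can be reproduced against the profile by hand:

    # hypothetical reconstruction of the testdata/build context
    mkdir -p build && echo hello > build/content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > build/Dockerfile
    out/minikube-darwin-amd64 -p functional-571000 image build -t localhost/my-image:functional-571000 ./build --alsologtostderr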

TestFunctional/parallel/ImageCommands/Setup (3.26s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.206953152s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-571000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.26s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image load --daemon gcr.io/google-containers/addon-resizer:functional-571000 --alsologtostderr
2023/07/07 15:54:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-571000 image load --daemon gcr.io/google-containers/addon-resizer:functional-571000 --alsologtostderr: (3.289244124s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.44s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image load --daemon gcr.io/google-containers/addon-resizer:functional-571000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-571000 image load --daemon gcr.io/google-containers/addon-resizer:functional-571000 --alsologtostderr: (2.2864882s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.43s)

TestFunctional/parallel/DockerEnv/bash (0.7s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-571000 docker-env) && out/minikube-darwin-amd64 status -p functional-571000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-571000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.70s)
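docker-env prints export statements for DOCKER_HOST, DOCKER_TLS_VERIFY and DOCKER_CERT_PATH, so eval-ing it points the host's docker CLI at the Docker daemon inside the functional-571000 VM; that is why the second command's "docker images" lists the cluster's images. To point the CLI back at the host daemon afterwards (not part of the test):

    eval $(out/minikube-darwin-amd64 -p functional-571000 docker-env --unset)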

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.237390223s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-571000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image load --daemon gcr.io/google-containers/addon-resizer:functional-571000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-571000 image load --daemon gcr.io/google-containers/addon-resizer:functional-571000 --alsologtostderr: (3.342053849s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.80s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image save gcr.io/google-containers/addon-resizer:functional-571000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
E0707 15:54:18.519541   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-571000 image save gcr.io/google-containers/addon-resizer:functional-571000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.207168122s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.21s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image rm gcr.io/google-containers/addon-resizer:functional-571000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-571000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.360457663s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.51s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-571000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-571000 image save --daemon gcr.io/google-containers/addon-resizer:functional-571000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-571000 image save --daemon gcr.io/google-containers/addon-resizer:functional-571000 --alsologtostderr: (2.204558021s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-571000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.32s)
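Together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a full round trip: save an image from the cluster to a tarball, remove it, load it back from the tarball, and finally export it into the host's Docker daemon. The same commands work as a manual workflow, e.g. to copy an image between two profiles (a sketch; the second profile name is illustrative):

    out/minikube-darwin-amd64 -p functional-571000 image save gcr.io/google-containers/addon-resizer:functional-571000 /tmp/addon-resizer.tar
    out/minikube-darwin-amd64 -p some-other-profile image load /tmp/addon-resizer.tar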

TestFunctional/delete_addon-resizer_images (0.13s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-571000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-571000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-571000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestImageBuild/serial/Setup (39.89s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-371000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-371000 --driver=hyperkit : (39.888081646s)
--- PASS: TestImageBuild/serial/Setup (39.89s)

TestImageBuild/serial/NormalBuild (2.27s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-371000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-371000: (2.274156973s)
--- PASS: TestImageBuild/serial/NormalBuild (2.27s)

TestImageBuild/serial/BuildWithBuildArg (0.66s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-371000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.66s)
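--build-opt forwards arbitrary flags to the build performed inside the VM, so the invocation above should behave roughly like the following docker build (the mapping is an assumption, not something the test asserts):

    docker build -t aaa:latest --build-arg ENV_A=test_env_str --no-cache ./testdata/image-build/test-arg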

TestImageBuild/serial/BuildWithDockerIgnore (0.21s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-371000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.21s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.2s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-371000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.20s)

TestIngressAddonLegacy/StartLegacyK8sCluster (82.73s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-298000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit 
E0707 15:55:40.442425   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-298000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit : (1m22.731882448s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (82.73s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (19.73s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-298000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-298000 addons enable ingress --alsologtostderr -v=5: (19.72830954s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (19.73s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.51s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-298000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.51s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (31.85s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-298000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-298000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.902641278s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-298000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-298000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [39ae147e-df41-49c9-9273-12683eec8209] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [39ae147e-df41-49c9-9273-12683eec8209] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.00747341s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-298000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-298000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-298000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.64.49
addons_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-298000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-298000 addons disable ingress-dns --alsologtostderr -v=1: (1.792480279s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-298000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-298000 addons disable ingress --alsologtostderr -v=1: (7.195561408s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (31.85s)
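
Note: the curl step validates virtual-host routing by overriding the Host header; the test runs it inside the VM against 127.0.0.1 over ssh. A minimal Go sketch of the same probe from the host, assuming the node IP seen later in this run (192.168.64.49):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// nginx ingress routes on the Host header, not the URL, so the probe
	// must present the virtual-host name while dialing the node address.
	req, err := http.NewRequest("GET", "http://192.168.64.49/", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Host = "nginx.example.com" // same effect as curl -H 'Host: nginx.example.com'
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}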

TestJSONOutput/start/Command (77.74s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-146000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0707 15:57:56.566598   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 15:58:21.254282   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 15:58:21.260808   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 15:58:21.272244   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 15:58:21.292487   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 15:58:21.332907   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 15:58:21.414497   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 15:58:21.575299   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 15:58:21.896042   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 15:58:22.537805   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 15:58:23.819525   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 15:58:24.256331   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 15:58:26.379795   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 15:58:31.500533   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 15:58:41.741002   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 15:59:02.221622   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-146000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (1m17.741705985s)
--- PASS: TestJSONOutput/start/Command (77.74s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.44s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-146000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.44s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.43s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-146000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.15s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-146000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-146000 --output=json --user=testUser: (8.146391973s)
--- PASS: TestJSONOutput/stop/Command (8.15s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.7s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-432000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-432000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (344.052738ms)
-- stdout --
	{"specversion":"1.0","id":"607d125d-488d-4e3d-8309-5bf149c7b977","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-432000] minikube v1.30.1 on Darwin 13.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b2604cc5-e0ac-48b4-adf7-da5472fb0214","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16845"}}
	{"specversion":"1.0","id":"07a77117-735c-4b65-a016-21cf19bd5f1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16845-29196/kubeconfig"}}
	{"specversion":"1.0","id":"b37f1f8e-da74-47d1-a6a3-abf65ba9a89a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"2a74aade-3ec7-484a-9aaa-7c3111602e56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0a1dd4c9-1c84-45ca-8101-980f9bcb550d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16845-29196/.minikube"}}
	{"specversion":"1.0","id":"780e3340-76a6-4f95-8782-91770724ee5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0e3ed306-336e-4dbe-8cf4-6dc02c030802","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-432000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-432000
--- PASS: TestErrorJSONOutput (0.70s)
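
Note: each stdout line above is a CloudEvents envelope, one JSON object per line, which is what the Audit and *CurrentSteps subtests parse. A minimal Go sketch that decodes such a stream from stdin and surfaces the error event; the loose typing of "data" is an assumption based only on the fields visible above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// event mirrors only the fields visible in the log above; data is kept
// loosely typed because its keys differ between step, info, and error events.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON noise between events
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exitcode=%s name=%s message=%q\n",
				ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}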

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMountStart/serial/StartWithMountFirst (17.15s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-850000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-850000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (16.150509766s)
--- PASS: TestMountStart/serial/StartWithMountFirst (17.15s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-850000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-850000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
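
Note: the VerifyMount* steps only assert that a 9p filesystem shows up in the guest's mount table. A minimal Go sketch of that check, assuming the profile name from this run:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Ask the guest for its mount table and look for a 9p entry,
	// the transport the mount test greps for on this driver.
	out, err := exec.Command("minikube", "-p", "mount-start-1-850000",
		"ssh", "--", "mount").CombinedOutput()
	if err != nil {
		log.Fatalf("minikube ssh failed: %v\n%s", err, out)
	}
	if !strings.Contains(string(out), "9p") {
		log.Fatalf("no 9p mount found:\n%s", out)
	}
	log.Println("9p mount present")
}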

TestMountStart/serial/StartWithMountSecond (17.31s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-860000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-860000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (16.30955307s)
--- PASS: TestMountStart/serial/StartWithMountSecond (17.31s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-860000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-860000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (2.4s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-850000 --alsologtostderr -v=5
E0707 16:01:05.108491   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-850000 --alsologtostderr -v=5: (2.395297972s)
--- PASS: TestMountStart/serial/DeleteFirst (2.40s)

TestMountStart/serial/VerifyMountPostDelete (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-860000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-860000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

TestMountStart/serial/Stop (2.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-860000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-860000: (2.216738605s)
--- PASS: TestMountStart/serial/Stop (2.22s)

TestMountStart/serial/RestartStopped (41.55s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-860000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-860000: (40.54785633s)
--- PASS: TestMountStart/serial/RestartStopped (41.55s)

TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-860000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-860000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (102.76s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-136000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0707 16:02:13.481546   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:02:13.486818   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:02:13.497123   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:02:13.518888   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:02:13.559261   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:02:13.640679   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:02:13.801276   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:02:14.121628   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:02:14.762006   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:02:16.042174   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:02:18.603041   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:02:23.723958   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:02:33.964662   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:02:54.447231   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:02:56.566589   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 16:03:21.257800   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 16:03:35.410058   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-136000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m42.532763973s)
multinode_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (102.76s)

TestMultiNode/serial/DeployApp2Nodes (5.58s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-136000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-136000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-136000 -- rollout status deployment/busybox: (4.022743174s)
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-136000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-136000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-136000 -- exec busybox-67b7f59bb-jbj7z -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-136000 -- exec busybox-67b7f59bb-pgvw7 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-136000 -- exec busybox-67b7f59bb-jbj7z -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-136000 -- exec busybox-67b7f59bb-pgvw7 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-136000 -- exec busybox-67b7f59bb-jbj7z -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-136000 -- exec busybox-67b7f59bb-pgvw7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.58s)
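
Note: the DNS assertions run nslookup inside each busybox pod so resolution is exercised from both nodes. A minimal Go sketch of the same loop, assuming kubectl on PATH and that the kubeconfig context is named after the profile (minikube's default):

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	ctx := "multinode-136000" // kubeconfig context, named after the profile
	// Collect the pod names the same way the test does (jsonpath).
	names, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range strings.Fields(string(names)) {
		// Resolve an external and an in-cluster name from inside each pod.
		for _, host := range []string{"kubernetes.io", "kubernetes.default"} {
			out, err := exec.Command("kubectl", "--context", ctx,
				"exec", pod, "--", "nslookup", host).CombinedOutput()
			if err != nil {
				log.Fatalf("%s: nslookup %s failed: %v\n%s", pod, host, err, out)
			}
		}
	}
	log.Println("DNS resolves from every pod")
}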

TestMultiNode/serial/PingHostFrom2Pods (0.83s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-136000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-136000 -- exec busybox-67b7f59bb-jbj7z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-136000 -- exec busybox-67b7f59bb-jbj7z -- sh -c "ping -c 1 192.168.64.1"
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-136000 -- exec busybox-67b7f59bb-pgvw7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-136000 -- exec busybox-67b7f59bb-pgvw7 -- sh -c "ping -c 1 192.168.64.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)
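
Note: the shell pipeline above takes line 5 of busybox's nslookup output and its third space-separated field to recover the address of host.minikube.internal, which is then pinged. The same extraction in Go, reading the nslookup text from stdin (the field positions are inherited from the pipeline, not re-derived here):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Replicate awk 'NR==5' | cut -d' ' -f3: take line 5, split on
	// single spaces exactly as cut does, and print field 3.
	sc := bufio.NewScanner(os.Stdin)
	for line := 1; sc.Scan(); line++ {
		if line != 5 {
			continue
		}
		fields := strings.Split(sc.Text(), " ")
		if len(fields) < 3 {
			log.Fatalf("unexpected line 5: %q", sc.Text())
		}
		fmt.Println(fields[2])
		return
	}
	log.Fatal("fewer than 5 lines of nslookup output")
}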

TestMultiNode/serial/AddNode (37.37s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-136000 -v 3 --alsologtostderr
E0707 16:03:48.953626   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-136000 -v 3 --alsologtostderr: (37.074669558s)
multinode_test.go:116: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (37.37s)

TestMultiNode/serial/ProfileList (0.24s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

TestMultiNode/serial/CopyFile (4.79s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 cp testdata/cp-test.txt multinode-136000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 cp multinode-136000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1161446772/001/cp-test_multinode-136000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 cp multinode-136000:/home/docker/cp-test.txt multinode-136000-m02:/home/docker/cp-test_multinode-136000_multinode-136000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000-m02 "sudo cat /home/docker/cp-test_multinode-136000_multinode-136000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 cp multinode-136000:/home/docker/cp-test.txt multinode-136000-m03:/home/docker/cp-test_multinode-136000_multinode-136000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000-m03 "sudo cat /home/docker/cp-test_multinode-136000_multinode-136000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 cp testdata/cp-test.txt multinode-136000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 cp multinode-136000-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1161446772/001/cp-test_multinode-136000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 cp multinode-136000-m02:/home/docker/cp-test.txt multinode-136000:/home/docker/cp-test_multinode-136000-m02_multinode-136000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000 "sudo cat /home/docker/cp-test_multinode-136000-m02_multinode-136000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 cp multinode-136000-m02:/home/docker/cp-test.txt multinode-136000-m03:/home/docker/cp-test_multinode-136000-m02_multinode-136000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000-m03 "sudo cat /home/docker/cp-test_multinode-136000-m02_multinode-136000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 cp testdata/cp-test.txt multinode-136000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 cp multinode-136000-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1161446772/001/cp-test_multinode-136000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 cp multinode-136000-m03:/home/docker/cp-test.txt multinode-136000:/home/docker/cp-test_multinode-136000-m03_multinode-136000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000 "sudo cat /home/docker/cp-test_multinode-136000-m03_multinode-136000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 cp multinode-136000-m03:/home/docker/cp-test.txt multinode-136000-m02:/home/docker/cp-test_multinode-136000-m03_multinode-136000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 ssh -n multinode-136000-m02 "sudo cat /home/docker/cp-test_multinode-136000-m03_multinode-136000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (4.79s)
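
Note: CopyFile pushes testdata/cp-test.txt host-to-node, node-to-host, and node-to-node, verifying every hop with `ssh -n <node> "sudo cat ..."`. A minimal Go sketch of one host-to-node hop plus its verification, assuming the profile from this run:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	profile := "multinode-136000"
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	// Host -> node copy, the first hop exercised above.
	if out, err := exec.Command("minikube", "-p", profile, "cp",
		"testdata/cp-test.txt", profile+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}
	// Read the file back over ssh and compare contents.
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("round-trip contents differ")
	}
	log.Println("cp round trip ok")
}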

TestMultiNode/serial/StopNode (2.65s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-darwin-amd64 -p multinode-136000 node stop m03: (2.176835467s)
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-136000 status: exit status 7 (235.441734ms)
-- stdout --
	multinode-136000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-136000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-136000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-136000 status --alsologtostderr: exit status 7 (232.253138ms)
-- stdout --
	multinode-136000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-136000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-136000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0707 16:04:28.072063   32053 out.go:296] Setting OutFile to fd 1 ...
	I0707 16:04:28.072283   32053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 16:04:28.072288   32053 out.go:309] Setting ErrFile to fd 2...
	I0707 16:04:28.072292   32053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 16:04:28.072408   32053 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16845-29196/.minikube/bin
	I0707 16:04:28.072591   32053 out.go:303] Setting JSON to false
	I0707 16:04:28.072619   32053 mustload.go:65] Loading cluster: multinode-136000
	I0707 16:04:28.073477   32053 notify.go:220] Checking for updates...
	I0707 16:04:28.073920   32053 config.go:182] Loaded profile config "multinode-136000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0707 16:04:28.073940   32053 status.go:255] checking status of multinode-136000 ...
	I0707 16:04:28.074289   32053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:04:28.074351   32053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 16:04:28.081240   32053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:65398
	I0707 16:04:28.081563   32053 main.go:141] libmachine: () Calling .GetVersion
	I0707 16:04:28.081976   32053 main.go:141] libmachine: Using API Version  1
	I0707 16:04:28.081987   32053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 16:04:28.082228   32053 main.go:141] libmachine: () Calling .GetMachineName
	I0707 16:04:28.082339   32053 main.go:141] libmachine: (multinode-136000) Calling .GetState
	I0707 16:04:28.082420   32053 main.go:141] libmachine: (multinode-136000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:04:28.082481   32053 main.go:141] libmachine: (multinode-136000) DBG | hyperkit pid from json: 31677
	I0707 16:04:28.083684   32053 status.go:330] multinode-136000 host status = "Running" (err=<nil>)
	I0707 16:04:28.083700   32053 host.go:66] Checking if "multinode-136000" exists ...
	I0707 16:04:28.083939   32053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:04:28.083957   32053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 16:04:28.090673   32053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:65400
	I0707 16:04:28.091011   32053 main.go:141] libmachine: () Calling .GetVersion
	I0707 16:04:28.091364   32053 main.go:141] libmachine: Using API Version  1
	I0707 16:04:28.091376   32053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 16:04:28.091601   32053 main.go:141] libmachine: () Calling .GetMachineName
	I0707 16:04:28.091704   32053 main.go:141] libmachine: (multinode-136000) Calling .GetIP
	I0707 16:04:28.091792   32053 host.go:66] Checking if "multinode-136000" exists ...
	I0707 16:04:28.092052   32053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:04:28.092075   32053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 16:04:28.098720   32053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:65402
	I0707 16:04:28.099053   32053 main.go:141] libmachine: () Calling .GetVersion
	I0707 16:04:28.099395   32053 main.go:141] libmachine: Using API Version  1
	I0707 16:04:28.099410   32053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 16:04:28.099626   32053 main.go:141] libmachine: () Calling .GetMachineName
	I0707 16:04:28.099724   32053 main.go:141] libmachine: (multinode-136000) Calling .DriverName
	I0707 16:04:28.099853   32053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0707 16:04:28.099875   32053 main.go:141] libmachine: (multinode-136000) Calling .GetSSHHostname
	I0707 16:04:28.099955   32053 main.go:141] libmachine: (multinode-136000) Calling .GetSSHPort
	I0707 16:04:28.100038   32053 main.go:141] libmachine: (multinode-136000) Calling .GetSSHKeyPath
	I0707 16:04:28.100115   32053 main.go:141] libmachine: (multinode-136000) Calling .GetSSHUsername
	I0707 16:04:28.100201   32053 sshutil.go:53] new ssh client: &{IP:192.168.64.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000/id_rsa Username:docker}
	I0707 16:04:28.138387   32053 ssh_runner.go:195] Run: systemctl --version
	I0707 16:04:28.143753   32053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0707 16:04:28.153308   32053 kubeconfig.go:92] found "multinode-136000" server: "https://192.168.64.55:8443"
	I0707 16:04:28.153329   32053 api_server.go:166] Checking apiserver status ...
	I0707 16:04:28.153368   32053 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0707 16:04:28.162160   32053 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1879/cgroup
	I0707 16:04:28.168371   32053 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod10d234a603360886d3e49d7f2ebd7116/69a988d9753c359529becae1d314a82c0accca1e1ea345ac5c5c37e00a889da2"
	I0707 16:04:28.168425   32053 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod10d234a603360886d3e49d7f2ebd7116/69a988d9753c359529becae1d314a82c0accca1e1ea345ac5c5c37e00a889da2/freezer.state
	I0707 16:04:28.174899   32053 api_server.go:204] freezer state: "THAWED"
	I0707 16:04:28.174909   32053 api_server.go:253] Checking apiserver healthz at https://192.168.64.55:8443/healthz ...
	I0707 16:04:28.178919   32053 api_server.go:279] https://192.168.64.55:8443/healthz returned 200:
	ok
	I0707 16:04:28.178929   32053 status.go:421] multinode-136000 apiserver status = Running (err=<nil>)
	I0707 16:04:28.178937   32053 status.go:257] multinode-136000 status: &{Name:multinode-136000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0707 16:04:28.178953   32053 status.go:255] checking status of multinode-136000-m02 ...
	I0707 16:04:28.179219   32053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:04:28.179241   32053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 16:04:28.186243   32053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:65406
	I0707 16:04:28.186664   32053 main.go:141] libmachine: () Calling .GetVersion
	I0707 16:04:28.187011   32053 main.go:141] libmachine: Using API Version  1
	I0707 16:04:28.187022   32053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 16:04:28.187230   32053 main.go:141] libmachine: () Calling .GetMachineName
	I0707 16:04:28.187340   32053 main.go:141] libmachine: (multinode-136000-m02) Calling .GetState
	I0707 16:04:28.187424   32053 main.go:141] libmachine: (multinode-136000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:04:28.187493   32053 main.go:141] libmachine: (multinode-136000-m02) DBG | hyperkit pid from json: 31727
	I0707 16:04:28.188702   32053 status.go:330] multinode-136000-m02 host status = "Running" (err=<nil>)
	I0707 16:04:28.188712   32053 host.go:66] Checking if "multinode-136000-m02" exists ...
	I0707 16:04:28.188967   32053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:04:28.188987   32053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 16:04:28.195959   32053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:65408
	I0707 16:04:28.196305   32053 main.go:141] libmachine: () Calling .GetVersion
	I0707 16:04:28.196640   32053 main.go:141] libmachine: Using API Version  1
	I0707 16:04:28.196652   32053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 16:04:28.196856   32053 main.go:141] libmachine: () Calling .GetMachineName
	I0707 16:04:28.196952   32053 main.go:141] libmachine: (multinode-136000-m02) Calling .GetIP
	I0707 16:04:28.197029   32053 host.go:66] Checking if "multinode-136000-m02" exists ...
	I0707 16:04:28.197287   32053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:04:28.197314   32053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 16:04:28.204098   32053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:65410
	I0707 16:04:28.204444   32053 main.go:141] libmachine: () Calling .GetVersion
	I0707 16:04:28.204790   32053 main.go:141] libmachine: Using API Version  1
	I0707 16:04:28.204802   32053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 16:04:28.205025   32053 main.go:141] libmachine: () Calling .GetMachineName
	I0707 16:04:28.205122   32053 main.go:141] libmachine: (multinode-136000-m02) Calling .DriverName
	I0707 16:04:28.205248   32053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0707 16:04:28.205261   32053 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHHostname
	I0707 16:04:28.205345   32053 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHPort
	I0707 16:04:28.205410   32053 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHKeyPath
	I0707 16:04:28.205502   32053 main.go:141] libmachine: (multinode-136000-m02) Calling .GetSSHUsername
	I0707 16:04:28.205582   32053 sshutil.go:53] new ssh client: &{IP:192.168.64.56 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16845-29196/.minikube/machines/multinode-136000-m02/id_rsa Username:docker}
	I0707 16:04:28.244491   32053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0707 16:04:28.252991   32053 status.go:257] multinode-136000-m02 status: &{Name:multinode-136000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0707 16:04:28.253006   32053 status.go:255] checking status of multinode-136000-m03 ...
	I0707 16:04:28.253259   32053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:04:28.253283   32053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 16:04:28.260111   32053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:65413
	I0707 16:04:28.260457   32053 main.go:141] libmachine: () Calling .GetVersion
	I0707 16:04:28.260817   32053 main.go:141] libmachine: Using API Version  1
	I0707 16:04:28.260837   32053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 16:04:28.261055   32053 main.go:141] libmachine: () Calling .GetMachineName
	I0707 16:04:28.261174   32053 main.go:141] libmachine: (multinode-136000-m03) Calling .GetState
	I0707 16:04:28.261258   32053 main.go:141] libmachine: (multinode-136000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:04:28.261325   32053 main.go:141] libmachine: (multinode-136000-m03) DBG | hyperkit pid from json: 31830
	I0707 16:04:28.262536   32053 main.go:141] libmachine: (multinode-136000-m03) DBG | hyperkit pid 31830 missing from process table
	I0707 16:04:28.262575   32053 status.go:330] multinode-136000-m03 host status = "Stopped" (err=<nil>)
	I0707 16:04:28.262586   32053 status.go:343] host is not running, skipping remaining checks
	I0707 16:04:28.262591   32053 status.go:257] multinode-136000-m03 status: &{Name:multinode-136000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.65s)
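
Note: `minikube status` signals a stopped node through its exit code (7 above) rather than through stderr, so callers must read the code instead of treating every non-zero exit as a command failure. A minimal Go sketch, assuming the profile from this run:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "multinode-136000", "status")
	out, err := cmd.Output() // stdout is still returned on a non-zero exit
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode() // 7 in the run above: a node's host is Stopped
	} else if err != nil {
		log.Fatal(err) // binary not found, etc.
	}
	fmt.Printf("exit code %d\n%s", code, out)
}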

TestMultiNode/serial/StartAfterStop (29.34s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-darwin-amd64 -p multinode-136000 node start m03 --alsologtostderr: (29.002414091s)
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 status
E0707 16:04:57.332280   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.34s)

TestMultiNode/serial/RestartKeepsNodes (191.08s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-136000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-136000
multinode_test.go:290: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-136000: (18.429655336s)
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-136000 --wait=true -v=8 --alsologtostderr
E0707 16:07:13.489205   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:07:41.177088   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:07:56.571512   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-136000 --wait=true -v=8 --alsologtostderr: (2m52.564017589s)
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-136000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (191.08s)
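
Note: RestartKeepsNodes snapshots `minikube node list` before the stop and asserts the same output after the restart; a byte-for-byte comparison also assumes node IPs survive the restart, as they did here. A minimal Go sketch:

package main

import (
	"bytes"
	"log"
	"os/exec"
)

// run executes a command and aborts on any failure, returning combined output.
func run(name string, args ...string) []byte {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	return out
}

// nodeList captures only stdout so stderr noise cannot skew the comparison.
func nodeList(profile string) []byte {
	out, err := exec.Command("minikube", "node", "list", "-p", profile).Output()
	if err != nil {
		log.Fatal(err)
	}
	return out
}

func main() {
	profile := "multinode-136000"
	before := nodeList(profile)
	run("minikube", "stop", "-p", profile)
	run("minikube", "start", "-p", profile, "--wait=true")
	if !bytes.Equal(before, nodeList(profile)) {
		log.Fatal("node list changed across restart")
	}
	log.Println("node list preserved")
}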

TestMultiNode/serial/DeleteNode (3s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-darwin-amd64 -p multinode-136000 node delete m03: (2.667896772s)
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.00s)
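
Note: the final go-template walks every node's status.conditions and prints the Ready condition's status. The same walk in Go over `kubectl get nodes -o json`, assuming kubectl on PATH:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// nodeList decodes just the fields the template touches.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		log.Fatal(err)
	}
	for _, n := range nl.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" { // {{if eq .type "Ready"}} in the template
				fmt.Printf("%s %s\n", n.Metadata.Name, c.Status)
			}
		}
	}
}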

TestMultiNode/serial/StopMultiNode (16.45s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 stop
E0707 16:08:21.264979   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p multinode-136000 stop: (16.33070347s)
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-136000 status: exit status 7 (61.515395ms)
-- stdout --
	multinode-136000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-136000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-136000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-136000 status --alsologtostderr: exit status 7 (61.180427ms)
-- stdout --
	multinode-136000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-136000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0707 16:08:28.125509   32265 out.go:296] Setting OutFile to fd 1 ...
	I0707 16:08:28.125684   32265 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 16:08:28.125689   32265 out.go:309] Setting ErrFile to fd 2...
	I0707 16:08:28.125693   32265 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0707 16:08:28.125803   32265 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16845-29196/.minikube/bin
	I0707 16:08:28.125993   32265 out.go:303] Setting JSON to false
	I0707 16:08:28.126031   32265 mustload.go:65] Loading cluster: multinode-136000
	I0707 16:08:28.126089   32265 notify.go:220] Checking for updates...
	I0707 16:08:28.126323   32265 config.go:182] Loaded profile config "multinode-136000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0707 16:08:28.126338   32265 status.go:255] checking status of multinode-136000 ...
	I0707 16:08:28.126678   32265 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:08:28.126730   32265 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 16:08:28.133494   32265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49209
	I0707 16:08:28.133835   32265 main.go:141] libmachine: () Calling .GetVersion
	I0707 16:08:28.134272   32265 main.go:141] libmachine: Using API Version  1
	I0707 16:08:28.134284   32265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 16:08:28.134483   32265 main.go:141] libmachine: () Calling .GetMachineName
	I0707 16:08:28.134589   32265 main.go:141] libmachine: (multinode-136000) Calling .GetState
	I0707 16:08:28.134673   32265 main.go:141] libmachine: (multinode-136000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:08:28.134740   32265 main.go:141] libmachine: (multinode-136000) DBG | hyperkit pid from json: 32119
	I0707 16:08:28.135633   32265 main.go:141] libmachine: (multinode-136000) DBG | hyperkit pid 32119 missing from process table
	I0707 16:08:28.135681   32265 status.go:330] multinode-136000 host status = "Stopped" (err=<nil>)
	I0707 16:08:28.135692   32265 status.go:343] host is not running, skipping remaining checks
	I0707 16:08:28.135697   32265 status.go:257] multinode-136000 status: &{Name:multinode-136000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0707 16:08:28.135716   32265 status.go:255] checking status of multinode-136000-m02 ...
	I0707 16:08:28.135982   32265 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0707 16:08:28.136008   32265 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0707 16:08:28.142826   32265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49211
	I0707 16:08:28.143162   32265 main.go:141] libmachine: () Calling .GetVersion
	I0707 16:08:28.143516   32265 main.go:141] libmachine: Using API Version  1
	I0707 16:08:28.143531   32265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0707 16:08:28.143789   32265 main.go:141] libmachine: () Calling .GetMachineName
	I0707 16:08:28.143919   32265 main.go:141] libmachine: (multinode-136000-m02) Calling .GetState
	I0707 16:08:28.144104   32265 main.go:141] libmachine: (multinode-136000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0707 16:08:28.144181   32265 main.go:141] libmachine: (multinode-136000-m02) DBG | hyperkit pid from json: 32151
	I0707 16:08:28.145134   32265 main.go:141] libmachine: (multinode-136000-m02) DBG | hyperkit pid 32151 missing from process table
	I0707 16:08:28.145156   32265 status.go:330] multinode-136000-m02 host status = "Stopped" (err=<nil>)
	I0707 16:08:28.145166   32265 status.go:343] host is not running, skipping remaining checks
	I0707 16:08:28.145178   32265 status.go:257] multinode-136000-m02 status: &{Name:multinode-136000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.45s)
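
Note: the two non-zero exits above are expected, not failures. A condensed sketch of the check this subtest performs, using this run's profile name; `status` deliberately exits non-zero when the host is stopped (exit status 7 in this run):

$ out/minikube-darwin-amd64 -p multinode-136000 stop
$ out/minikube-darwin-amd64 -p multinode-136000 status; echo "exit: $?"
# both nodes report host/kubelet Stopped, and the non-zero exit code signals
# "stopped" to callers rather than an error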

TestMultiNode/serial/ValidateNameConflict (45.65s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-136000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-136000-m02 --driver=hyperkit 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-136000-m02 --driver=hyperkit : exit status 14 (432.484604ms)
-- stdout --
	* [multinode-136000-m02] minikube v1.30.1 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16845
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16845-29196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16845-29196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-136000-m02' is duplicated with machine name 'multinode-136000-m02' in profile 'multinode-136000'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-136000-m03 --driver=hyperkit 
multinode_test.go:460: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-136000-m03 --driver=hyperkit : (39.599528841s)
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-136000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-136000: exit status 80 (273.569871ms)
-- stdout --
	* Adding node m03 to cluster multinode-136000
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-136000-m03 already exists in multinode-136000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-136000-m03
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-136000-m03: (5.304030356s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.65s)
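
Note: the conflict arises because a multi-node profile names its machines <profile>-m02, <profile>-m03, and so on, so those names are effectively reserved. A condensed sketch of what this subtest exercises, with the names from this run:

$ out/minikube-darwin-amd64 start -p multinode-136000-m02 --driver=hyperkit
# exit status 14 (MK_USAGE): collides with the m02 machine inside multinode-136000
$ out/minikube-darwin-amd64 start -p multinode-136000-m03 --driver=hyperkit
# succeeds as a standalone profile, but the follow-up "node add -p multinode-136000"
# then fails with exit status 80 because the m03 name is already taken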

TestPreload (155.19s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-216000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0707 16:12:13.495183   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:12:56.578402   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-216000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m17.216594387s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-216000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-216000 image pull gcr.io/k8s-minikube/busybox: (2.276578358s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-216000
E0707 16:13:21.271562   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-216000: (8.211498752s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-216000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-216000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (1m2.050531859s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-216000 image list
helpers_test.go:175: Cleaning up "test-preload-216000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-216000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-216000: (5.29976953s)
--- PASS: TestPreload (155.19s)
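
Note: roughly, the sequence driven here verifies that an image pulled into a non-preloaded cluster survives a stop/start cycle:

$ out/minikube-darwin-amd64 -p test-preload-216000 image pull gcr.io/k8s-minikube/busybox
$ out/minikube-darwin-amd64 stop -p test-preload-216000
$ out/minikube-darwin-amd64 start -p test-preload-216000 --wait=true --driver=hyperkit
$ out/minikube-darwin-amd64 -p test-preload-216000 image list
# busybox should still appear in the list after the restart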

TestScheduledStopUnix (107.3s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-776000 --memory=2048 --driver=hyperkit 
E0707 16:14:44.329072   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-776000 --memory=2048 --driver=hyperkit : (35.949541223s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-776000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-776000 -n scheduled-stop-776000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-776000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-776000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-776000 -n scheduled-stop-776000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-776000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-776000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-776000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-776000: exit status 7 (57.56086ms)
-- stdout --
	scheduled-stop-776000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-776000 -n scheduled-stop-776000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-776000 -n scheduled-stop-776000: exit status 7 (54.011275ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-776000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-776000
--- PASS: TestScheduledStopUnix (107.30s)
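
Note: a condensed view of the scheduled-stop flow this test exercises, using this run's profile name:

$ out/minikube-darwin-amd64 stop -p scheduled-stop-776000 --schedule 5m        # arm a stop 5 minutes out
$ out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-776000  # inspect the pending timer
$ out/minikube-darwin-amd64 stop -p scheduled-stop-776000 --cancel-scheduled   # disarm it
# once a scheduled stop fires, "status" exits 7 with host/kubelet/apiserver
# Stopped, which the test treats as expected ("may be ok")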

TestSkaffold (112.28s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3695267139 version
skaffold_test.go:63: skaffold version: v2.6.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-651000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-651000 --memory=2600 --driver=hyperkit : (34.981570994s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3695267139 run --minikube-profile skaffold-651000 --kube-context skaffold-651000 --status-check=true --port-forward=false --interactive=false
E0707 16:17:13.501237   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:17:56.585844   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3695267139 run --minikube-profile skaffold-651000 --kube-context skaffold-651000 --status-check=true --port-forward=false --interactive=false: (58.119017063s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6f7f67c659-vcdbl" [722dc50d-a3c1-417d-9c31-1d68963abfc6] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012565994s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-65cf957896-xxhc4" [cb4a159d-6387-4dbd-8d3e-0b2323d191ea] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.005727075s
helpers_test.go:175: Cleaning up "skaffold-651000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-651000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-651000: (5.299400694s)
--- PASS: TestSkaffold (112.28s)

TestRunningBinaryUpgrade (175.58s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.6.2.3837259096.exe start -p running-upgrade-847000 --memory=2200 --vm-driver=hyperkit 
E0707 16:22:13.485408   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.6.2.3837259096.exe start -p running-upgrade-847000 --memory=2200 --vm-driver=hyperkit : (1m40.129554439s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-847000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0707 16:22:56.569389   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 16:23:00.102808   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
E0707 16:23:00.108171   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
E0707 16:23:00.120424   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
E0707 16:23:00.142549   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
E0707 16:23:00.184794   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
E0707 16:23:00.266193   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
E0707 16:23:00.426858   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
E0707 16:23:00.749045   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
E0707 16:23:01.390681   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
E0707 16:23:02.671564   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
E0707 16:23:05.232570   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
E0707 16:23:10.353949   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
E0707 16:23:20.596378   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
E0707 16:23:21.262098   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
version_upgrade_test.go:142: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-847000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m8.290186252s)
helpers_test.go:175: Cleaning up "running-upgrade-847000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-847000
E0707 16:23:41.077240   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-847000: (5.301791919s)
--- PASS: TestRunningBinaryUpgrade (175.58s)

TestKubernetesUpgrade (141.27s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-361000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-361000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit : (1m14.195982734s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-361000
version_upgrade_test.go:239: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-361000: (2.218583897s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-361000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-361000 status --format={{.Host}}: exit status 7 (65.277254ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-361000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:255: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-361000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=hyperkit : (33.948572334s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-361000 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-361000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit 
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-361000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit : exit status 106 (644.247785ms)
-- stdout --
	* [kubernetes-upgrade-361000] minikube v1.30.1 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16845
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16845-29196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16845-29196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-361000
	    minikube start -p kubernetes-upgrade-361000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3610002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-361000 --kubernetes-version=v1.27.3
	    
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-361000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:287: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-361000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=hyperkit : (24.848992569s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-361000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-361000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-361000: (5.3024139s)
--- PASS: TestKubernetesUpgrade (141.27s)
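
Note: the upgrade path exercised here, condensed (profile and versions from this run); downgrading an existing cluster is refused:

$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-361000 --kubernetes-version=v1.16.0 --driver=hyperkit
$ out/minikube-darwin-amd64 stop -p kubernetes-upgrade-361000
$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-361000 --kubernetes-version=v1.27.3 --driver=hyperkit
$ kubectl --context kubernetes-upgrade-361000 version --output=json   # confirm the new server version
# re-running start with --kubernetes-version=v1.16.0 exits 106
# (K8S_DOWNGRADE_UNSUPPORTED); delete and recreate instead, as suggested above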

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.86s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
E0707 16:18:21.277390   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
* minikube v1.30.1 on darwin
- MINIKUBE_LOCATION=16845
- KUBECONFIG=/Users/jenkins/minikube-integration/16845-29196/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2114815408/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2114815408/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2114815408/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2114815408/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.86s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.56s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.30.1 on darwin
- MINIKUBE_LOCATION=16845
- KUBECONFIG=/Users/jenkins/minikube-integration/16845-29196/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3961471086/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3961471086/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3961471086/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3961471086/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.56s)

TestStoppedBinaryUpgrade/Setup (1.94s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.94s)

TestStoppedBinaryUpgrade/Upgrade (162.36s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.6.2.3556802569.exe start -p stopped-upgrade-246000 --memory=2200 --vm-driver=hyperkit 
E0707 16:24:22.039634   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.6.2.3556802569.exe start -p stopped-upgrade-246000 --memory=2200 --vm-driver=hyperkit : (1m32.624185873s)
version_upgrade_test.go:204: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.6.2.3556802569.exe -p stopped-upgrade-246000 stop
E0707 16:25:43.962392   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
version_upgrade_test.go:204: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.6.2.3556802569.exe -p stopped-upgrade-246000 stop: (8.082903085s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-246000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0707 16:25:59.631922   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
version_upgrade_test.go:210: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-246000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m1.652935015s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (162.36s)

TestPause/serial/Start (61.97s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-550000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-550000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (1m1.966831506s)
--- PASS: TestPause/serial/Start (61.97s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.26s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-246000
version_upgrade_test.go:218: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-246000: (3.256738462s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.26s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.48s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-891000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-891000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (477.892646ms)
-- stdout --
	* [NoKubernetes-891000] minikube v1.30.1 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16845
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16845-29196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16845-29196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.48s)
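
Note: --no-kubernetes and --kubernetes-version are mutually exclusive, hence the expected exit status 14. Per the suggestion in the output above, a version pinned in the global config would be cleared first:

$ out/minikube-darwin-amd64 config unset kubernetes-version
$ out/minikube-darwin-amd64 start -p NoKubernetes-891000 --no-kubernetes --driver=hyperkit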

TestNoKubernetes/serial/StartWithK8s (38.89s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-891000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-891000 --driver=hyperkit : (38.747077295s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-891000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.89s)

TestPause/serial/SecondStartNoReconfiguration (46.12s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-550000 --alsologtostderr -v=1 --driver=hyperkit 
E0707 16:27:13.491385   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-550000 --alsologtostderr -v=1 --driver=hyperkit : (46.100379157s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (46.12s)

TestNoKubernetes/serial/StartWithStopK8s (8.22s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-891000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-891000 --no-kubernetes --driver=hyperkit : (5.594170835s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-891000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-891000 status -o json: exit status 2 (131.212337ms)
-- stdout --
	{"Name":"NoKubernetes-891000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-891000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-891000: (2.494937843s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.22s)

TestNoKubernetes/serial/Start (18.66s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-891000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-891000 --no-kubernetes --driver=hyperkit : (18.6628925s)
--- PASS: TestNoKubernetes/serial/Start (18.66s)

TestPause/serial/Pause (0.51s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-550000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.51s)

TestPause/serial/VerifyStatus (0.14s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-550000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-550000 --output=json --layout=cluster: exit status 2 (139.943563ms)
-- stdout --
	{"Name":"pause-550000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-550000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.14s)
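
Note: with --output=json --layout=cluster the status comes back as one JSON document. In this run the paused cluster reports StatusCode 418 ("Paused") for the cluster and apiserver and 405 ("Stopped") for the kubelet, while the command itself exits 2, so callers should parse the JSON rather than rely on a zero exit code:

$ out/minikube-darwin-amd64 status -p pause-550000 --output=json --layout=cluster; echo "exit: $?"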

TestPause/serial/Unpause (0.49s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-550000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.49s)

TestPause/serial/PauseAgain (0.61s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-550000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.61s)

TestPause/serial/DeletePaused (5.26s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-550000 --alsologtostderr -v=5
E0707 16:27:56.575596   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-550000 --alsologtostderr -v=5: (5.259491648s)
--- PASS: TestPause/serial/DeletePaused (5.26s)

TestPause/serial/VerifyDeletedResources (0.2s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
E0707 16:28:00.110063   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
--- PASS: TestPause/serial/VerifyDeletedResources (0.20s)

TestNetworkPlugins/group/auto/Start (51.76s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (51.755113711s)
--- PASS: TestNetworkPlugins/group/auto/Start (51.76s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.11s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-891000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-891000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (110.440843ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.11s)
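
Note: the non-zero exit is the point of this check. systemctl is-active --quiet exits non-zero for an inactive unit (status 3 in this run), and minikube ssh propagates that, confirming the kubelet is not running in a --no-kubernetes VM:

$ out/minikube-darwin-amd64 ssh -p NoKubernetes-891000 "sudo systemctl is-active --quiet service kubelet"
# expected: non-zero exit ("Process exited with status 3") while Kubernetes is disabled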

TestNoKubernetes/serial/ProfileList (0.35s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.35s)

TestNoKubernetes/serial/Stop (8.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-891000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-891000: (8.237158307s)
--- PASS: TestNoKubernetes/serial/Stop (8.24s)

TestNoKubernetes/serial/StartNoArgs (16.81s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-891000 --driver=hyperkit 
E0707 16:28:21.267558   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 16:28:27.808039   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-891000 --driver=hyperkit : (16.814690295s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (16.81s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.11s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-891000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-891000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (113.72479ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.11s)

TestNetworkPlugins/group/kindnet/Start (58.57s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (58.57245519s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.57s)

TestNetworkPlugins/group/auto/KubeletFlags (0.13s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-114000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.13s)

TestNetworkPlugins/group/auto/NetCatPod (14.25s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-114000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-qvbz8" [6f970e3a-cceb-465b-936f-4ecdcbfb0d13] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-qvbz8" [6f970e3a-cceb-465b-936f-4ecdcbfb0d13] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.00611319s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.25s)

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-114000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
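
Note: the DNS/Localhost/HairPin trio above boils down to three probes run inside the netcat deployment; the last one connects back to the pod through its own service ("hairpin" traffic). A sketch using this run's context:

$ kubectl --context auto-114000 exec deployment/netcat -- nslookup kubernetes.default
$ kubectl --context auto-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
$ kubectl --context auto-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
# -z scans without sending data and -w 5 bounds the wait; the same trio runs
# for each CNI group (kindnet, calico, custom-flannel) below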

TestNetworkPlugins/group/calico/Start (70.56s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m10.558755543s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.56s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-llmdz" [a1f8e852-a411-449b-8572-7a726d6180ac] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.012374341s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-114000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

TestNetworkPlugins/group/kindnet/NetCatPod (15.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-114000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-f64r5" [9b572006-384f-4064-af17-b718d26a2fd8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-f64r5" [9b572006-384f-4064-af17-b718d26a2fd8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 15.005757376s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (15.25s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-114000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/Start (58.97s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (58.970964611s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.97s)

TestNetworkPlugins/group/calico/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-q9kjg" [642ede76-233f-4e2f-bac7-670cbb7a85c7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.011512212s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-114000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

TestNetworkPlugins/group/calico/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-114000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-62z9d" [6779f7d8-08b8-482a-a25d-dcecb08bd5d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-62z9d" [6779f7d8-08b8-482a-a25d-dcecb08bd5d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005942452s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.28s)
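
Each NetCatPod step (re)creates the probe deployment with replace --force and then polls until a pod labeled app=netcat reports Running, which is why the log shows the Pending / ContainersNotReady states first. Outside the suite, kubectl wait gives roughly the same behavior as the polling helper:

	kubectl --context calico-114000 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context calico-114000 wait --for=condition=Ready pod -l app=netcat --timeout=15m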

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-114000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-114000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-114000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-mtj2n" [5a0bad4b-5ae1-4f38-acf1-e7d177ae3b1b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-mtj2n" [5a0bad4b-5ae1-4f38-acf1-e7d177ae3b1b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.006289891s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.26s)

TestNetworkPlugins/group/false/Start (53.76s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (53.758643154s)
--- PASS: TestNetworkPlugins/group/false/Start (53.76s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-114000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/Start (52.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (52.521794939s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (52.52s)

TestNetworkPlugins/group/false/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-114000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.15s)

TestNetworkPlugins/group/false/NetCatPod (14.28s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-114000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-hj2wl" [be16b782-31aa-47cf-af77-a7389715c2ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-hj2wl" [be16b782-31aa-47cf-af77-a7389715c2ac] Running
E0707 16:32:13.498666   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 14.008040646s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (14.28s)

TestNetworkPlugins/group/false/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-114000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.17s)

TestNetworkPlugins/group/false/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.10s)

TestNetworkPlugins/group/false/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-114000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.14s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-114000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-r2snq" [c76e59f8-6546-4bd4-bef3-867bd45d2601] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-r2snq" [c76e59f8-6546-4bd4-bef3-867bd45d2601] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.006091438s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.26s)

TestNetworkPlugins/group/flannel/Start (58.95s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (58.954094055s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.95s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-114000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/bridge/Start (60.88s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
E0707 16:33:21.273836   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (1m0.884385757s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.88s)

TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kts82" [f7572361-bfb7-4ba0-bc5b-b7ef9d5f828a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.011176997s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-114000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

TestNetworkPlugins/group/flannel/NetCatPod (15.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-114000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-rxxrp" [a0443392-1d2d-4673-b488-fbe0588afd0d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-rxxrp" [a0443392-1d2d-4673-b488-fbe0588afd0d] Running
E0707 16:33:52.728400   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
E0707 16:33:52.734809   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
E0707 16:33:52.746387   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
E0707 16:33:52.767557   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
E0707 16:33:52.808622   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
E0707 16:33:52.889046   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
E0707 16:33:53.049289   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
E0707 16:33:53.369638   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
E0707 16:33:54.009919   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 15.005855163s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.26s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-114000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-114000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.14s)

TestNetworkPlugins/group/bridge/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-114000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-n9qd7" [86b18195-1779-43b0-ae3a-8c2f4f6fd6b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0707 16:34:02.974108   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-n9qd7" [86b18195-1779-43b0-ae3a-8c2f4f6fd6b7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.005405802s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.28s)

TestNetworkPlugins/group/kubenet/Start (51.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
E0707 16:34:13.216531   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-114000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (51.151654301s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (51.15s)
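
Unlike the --cni variants above, kubenet is selected through the kubelet's legacy network-plugin mechanism rather than by applying a CNI manifest, hence the different flag on an otherwise identical start:

	minikube start -p kubenet-demo --memory=3072 --driver=hyperkit --network-plugin=kubenet
	# kubenet-demo is a placeholder profile name; the run above used kubenet-114000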

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-114000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestStartStop/group/old-k8s-version/serial/FirstStart (149.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-155000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E0707 16:34:31.975383   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kindnet-114000/client.crt: no such file or directory
E0707 16:34:33.697455   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
E0707 16:34:34.536411   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kindnet-114000/client.crt: no such file or directory
E0707 16:34:39.658059   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kindnet-114000/client.crt: no such file or directory
E0707 16:34:49.898410   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kindnet-114000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-155000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (2m29.815230975s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (149.82s)
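
The old-k8s-version group pins the cluster to v1.16.0 with --kubernetes-version to confirm that current minikube can still bootstrap an old control plane; the --kvm-* options are KVM-driver settings that appear to be carried along from the shared test matrix rather than used by hyperkit. A trimmed-down reproduction with a placeholder profile name:

	minikube start -p old-k8s-demo --memory=2200 --driver=hyperkit --kubernetes-version=v1.16.0
	kubectl --context old-k8s-demo get nodes
	# the node's VERSION column should report v1.16.0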

TestNetworkPlugins/group/kubenet/KubeletFlags (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-114000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.14s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.26s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-114000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-g2xz5" [886396b6-9adc-4fbd-b5ed-5d380c750eb8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0707 16:35:10.379192   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kindnet-114000/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-g2xz5" [886396b6-9adc-4fbd-b5ed-5d380c750eb8] Running
E0707 16:35:14.660069   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.007257775s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.26s)

TestNetworkPlugins/group/kubenet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-114000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

TestNetworkPlugins/group/kubenet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

TestNetworkPlugins/group/kubenet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-114000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)
E0707 16:50:52.534492   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kindnet-114000/client.crt: no such file or directory
E0707 16:51:06.864364   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:51:35.349912   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (62.28s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-836000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.27.3
E0707 16:35:34.235012   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
E0707 16:35:34.240370   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
E0707 16:35:34.250930   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
E0707 16:35:34.272372   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
E0707 16:35:34.313375   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
E0707 16:35:34.394181   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
E0707 16:35:34.555129   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
E0707 16:35:34.875148   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
E0707 16:35:35.516380   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
E0707 16:35:36.796414   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
E0707 16:35:39.357027   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
E0707 16:35:44.476501   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
E0707 16:35:51.335627   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kindnet-114000/client.crt: no such file or directory
E0707 16:35:54.716779   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
E0707 16:36:06.787819   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:36:06.793310   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:36:06.804804   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:36:06.826143   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:36:06.866359   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:36:06.947715   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:36:07.107864   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:36:07.429795   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:36:08.071195   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:36:09.352037   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:36:11.913569   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:36:15.198267   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
E0707 16:36:17.034424   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:36:27.275097   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-836000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.27.3: (1m2.279408471s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (62.28s)

TestStartStop/group/no-preload/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-836000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1e35c45d-473c-4845-bbc7-74129169e87d] Pending
helpers_test.go:344: "busybox" [1e35c45d-473c-4845-bbc7-74129169e87d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0707 16:36:36.575919   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [1e35c45d-473c-4845-bbc7-74129169e87d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.012827356s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-836000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.33s)
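
DeployApp waits for a plain busybox pod and then reads the container's open-file limit, which both proves exec works against the freshly started cluster and catches runtime ulimit regressions. By hand:

	kubectl --context no-preload-836000 create -f testdata/busybox.yaml
	kubectl --context no-preload-836000 exec busybox -- /bin/sh -c "ulimit -n"
	# prints the file-descriptor limit inside the container; the exact
	# number depends on the container runtime's defaults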

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.74s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-836000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-836000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.74s)
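
addons enable takes per-component image and registry overrides, which the suite uses to point metrics-server at a stand-in image behind a deliberately unreachable registry (fake.domain) and then inspect the rendered deployment:

	minikube addons enable metrics-server -p no-preload-836000 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain
	kubectl --context no-preload-836000 describe deploy/metrics-server -n kube-system
	# the describe output should show the image now referencing fake.domain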

TestStartStop/group/no-preload/serial/Stop (8.24s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-836000 --alsologtostderr -v=3
E0707 16:36:47.756644   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-836000 --alsologtostderr -v=3: (8.236332571s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.24s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000: exit status 7 (52.413367ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-836000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)
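
The exit status 7 from minikube status is expected at this point: the profile was just stopped, so the host is not Running and status signals that through a non-zero code, which the test explicitly tolerates ("may be ok"). Enabling the dashboard addon afterwards shows that addon configuration can be changed while the cluster is down:

	minikube status --format={{.Host}} -p no-preload-836000; echo "status exit=$?"
	minikube addons enable dashboard -p no-preload-836000
	# status prints Stopped and exits non-zero; the addons enable still succeeds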

TestStartStop/group/no-preload/serial/SecondStart (298.16s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-836000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.27.3
E0707 16:36:56.160214   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-836000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.27.3: (4m58.00635342s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (298.16s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-155000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f680ee43-986b-4de7-bf32-41410560131e] Pending
helpers_test.go:344: "busybox" [f680ee43-986b-4de7-bf32-41410560131e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0707 16:37:03.405400   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
E0707 16:37:03.411748   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
E0707 16:37:03.422077   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
E0707 16:37:03.443415   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
E0707 16:37:03.484188   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
E0707 16:37:03.566348   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
E0707 16:37:03.726767   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
E0707 16:37:04.048471   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
E0707 16:37:04.688787   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f680ee43-986b-4de7-bf32-41410560131e] Running
E0707 16:37:05.969634   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
E0707 16:37:08.543597   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.017921035s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-155000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-155000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-155000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/old-k8s-version/serial/Stop (8.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-155000 --alsologtostderr -v=3
E0707 16:37:13.256339   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kindnet-114000/client.crt: no such file or directory
E0707 16:37:13.498807   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:37:13.665992   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-155000 --alsologtostderr -v=3: (8.261445024s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.26s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-155000 -n old-k8s-version-155000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-155000 -n old-k8s-version-155000: exit status 7 (51.531144ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-155000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/old-k8s-version/serial/SecondStart (486.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-155000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E0707 16:37:23.907268   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
E0707 16:37:28.719218   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:37:30.750664   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:37:30.755908   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:37:30.767578   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:37:30.787664   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:37:30.828083   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:37:30.910208   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:37:31.071825   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:37:31.392258   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:37:32.034102   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:37:33.315584   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:37:35.876357   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:37:40.996956   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:37:44.389807   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
E0707 16:37:51.237317   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:37:56.580946   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 16:38:00.117788   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
E0707 16:38:11.719992   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:38:18.083285   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
E0707 16:38:21.275550   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 16:38:25.351844   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
E0707 16:38:33.870254   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
E0707 16:38:33.875424   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
E0707 16:38:33.886901   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
E0707 16:38:33.908366   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
E0707 16:38:33.949202   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
E0707 16:38:34.029962   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
E0707 16:38:34.191462   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
E0707 16:38:34.513745   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
E0707 16:38:35.155538   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
E0707 16:38:36.437116   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
E0707 16:38:38.997738   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
E0707 16:38:44.120000   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
E0707 16:38:50.641369   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:38:52.683116   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:38:52.727135   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
E0707 16:38:54.361887   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
E0707 16:39:02.452854   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
E0707 16:39:02.459285   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
E0707 16:39:02.470852   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
E0707 16:39:02.491344   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
E0707 16:39:02.532161   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
E0707 16:39:02.613674   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
E0707 16:39:02.774145   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
E0707 16:39:03.095308   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
E0707 16:39:03.735814   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
E0707 16:39:05.018020   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
E0707 16:39:07.580313   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
E0707 16:39:12.701859   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
E0707 16:39:14.842532   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
E0707 16:39:20.419709   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
E0707 16:39:22.943086   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
E0707 16:39:23.175729   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
E0707 16:39:29.411857   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kindnet-114000/client.crt: no such file or directory
E0707 16:39:43.425475   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
E0707 16:39:47.273760   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
E0707 16:39:55.804080   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
E0707 16:39:57.100933   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kindnet-114000/client.crt: no such file or directory
E0707 16:40:03.621640   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
E0707 16:40:03.627964   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
E0707 16:40:03.638490   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
E0707 16:40:03.659656   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
E0707 16:40:03.701741   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
E0707 16:40:03.782857   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
E0707 16:40:03.944434   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
E0707 16:40:04.266155   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
E0707 16:40:04.906736   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
E0707 16:40:06.186945   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
E0707 16:40:08.749187   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
E0707 16:40:13.870570   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
E0707 16:40:14.605564   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:40:24.111321   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
E0707 16:40:24.387021   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
E0707 16:40:34.237760   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
E0707 16:40:44.592551   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
E0707 16:41:01.927116   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
E0707 16:41:06.793166   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:41:17.727404   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
E0707 16:41:25.553664   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
E0707 16:41:34.485029   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:41:46.309376   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-155000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (8m6.559943789s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-155000 -n old-k8s-version-155000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (486.71s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-2xlqv" [f3ceb861-8e7c-403c-b2d4-da5a408952e2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013598729s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-2xlqv" [f3ceb861-8e7c-403c-b2d4-da5a408952e2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00555347s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-836000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-836000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/no-preload/serial/Pause (1.81s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-836000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-836000 -n no-preload-836000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-836000 -n no-preload-836000: exit status 2 (144.074206ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-836000 -n no-preload-836000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-836000 -n no-preload-836000: exit status 2 (143.661941ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-836000 --alsologtostderr -v=1
E0707 16:42:03.411223   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-836000 -n no-preload-836000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-836000 -n no-preload-836000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.81s)

TestStartStop/group/embed-certs/serial/FirstStart (79.03s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-715000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.27.3
E0707 16:42:13.505702   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:42:30.758131   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:42:31.118322   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
E0707 16:42:39.647181   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 16:42:47.477280   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
E0707 16:42:56.587951   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 16:42:58.450958   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:43:00.122702   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
E0707 16:43:21.281668   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-715000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.27.3: (1m19.034180994s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (79.03s)

TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-715000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2753aa12-b3ea-4356-8d16-9a5ecb281343] Pending
helpers_test.go:344: "busybox" [2753aa12-b3ea-4356-8d16-9a5ecb281343] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2753aa12-b3ea-4356-8d16-9a5ecb281343] Running
E0707 16:43:33.877426   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.012379807s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-715000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.77s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-715000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-715000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/embed-certs/serial/Stop (8.25s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-715000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-715000 --alsologtostderr -v=3: (8.246108774s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.25s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-715000 -n embed-certs-715000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-715000 -n embed-certs-715000: exit status 7 (52.361997ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-715000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/embed-certs/serial/SecondStart (298.08s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-715000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.27.3
E0707 16:43:52.731626   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
E0707 16:44:01.572044   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
E0707 16:44:02.460492   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
E0707 16:44:29.417807   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kindnet-114000/client.crt: no such file or directory
E0707 16:44:30.153175   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/bridge-114000/client.crt: no such file or directory
E0707 16:45:03.627774   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-715000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.27.3: (4m57.933769958s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-715000 -n embed-certs-715000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (298.08s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-vlt6w" [c400cd88-3cef-4bfa-a18f-5f4c092afeb9] Running
E0707 16:45:31.321717   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011131714s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-vlt6w" [c400cd88-3cef-4bfa-a18f-5f4c092afeb9] Running
E0707 16:45:34.243390   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004338336s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-155000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-155000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/old-k8s-version/serial/Pause (1.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-155000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-155000 -n old-k8s-version-155000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-155000 -n old-k8s-version-155000: exit status 2 (145.546948ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-155000 -n old-k8s-version-155000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-155000 -n old-k8s-version-155000: exit status 2 (143.614287ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-155000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-155000 -n old-k8s-version-155000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-155000 -n old-k8s-version-155000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.69s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-663000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.27.3
E0707 16:46:06.797823   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/custom-flannel-114000/client.crt: no such file or directory
E0707 16:46:35.285016   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory
E0707 16:46:35.291046   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory
E0707 16:46:35.302187   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory
E0707 16:46:35.324312   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory
E0707 16:46:35.366448   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory
E0707 16:46:35.446783   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory
E0707 16:46:35.607998   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory
E0707 16:46:35.928323   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-663000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.27.3: (51.59800874s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.60s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-663000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5ba2ee96-9c7b-4d42-9f08-db4350b97512] Pending
E0707 16:46:36.650059   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [5ba2ee96-9c7b-4d42-9f08-db4350b97512] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0707 16:46:37.932273   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [5ba2ee96-9c7b-4d42-9f08-db4350b97512] Running
E0707 16:46:40.493010   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.012885156s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-663000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.33s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-663000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0707 16:46:45.613522   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-663000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (8.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-663000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-663000 --alsologtostderr -v=3: (8.241542369s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.24s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-663000 -n default-k8s-diff-port-663000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-663000 -n default-k8s-diff-port-663000: exit status 7 (51.61488ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-663000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (296.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-663000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.27.3
E0707 16:46:55.853918   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory
E0707 16:47:01.318225   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/old-k8s-version-155000/client.crt: no such file or directory
E0707 16:47:01.323301   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/old-k8s-version-155000/client.crt: no such file or directory
E0707 16:47:01.334740   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/old-k8s-version-155000/client.crt: no such file or directory
E0707 16:47:01.355683   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/old-k8s-version-155000/client.crt: no such file or directory
E0707 16:47:01.397187   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/old-k8s-version-155000/client.crt: no such file or directory
E0707 16:47:01.478209   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/old-k8s-version-155000/client.crt: no such file or directory
E0707 16:47:01.638808   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/old-k8s-version-155000/client.crt: no such file or directory
E0707 16:47:01.959847   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/old-k8s-version-155000/client.crt: no such file or directory
E0707 16:47:02.600040   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/old-k8s-version-155000/client.crt: no such file or directory
E0707 16:47:03.417995   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/false-114000/client.crt: no such file or directory
E0707 16:47:03.881539   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/old-k8s-version-155000/client.crt: no such file or directory
E0707 16:47:06.443869   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/old-k8s-version-155000/client.crt: no such file or directory
E0707 16:47:11.565170   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/old-k8s-version-155000/client.crt: no such file or directory
E0707 16:47:13.512766   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:47:16.334828   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory
E0707 16:47:21.807628   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/old-k8s-version-155000/client.crt: no such file or directory
E0707 16:47:30.764677   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/enable-default-cni-114000/client.crt: no such file or directory
E0707 16:47:42.288878   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/old-k8s-version-155000/client.crt: no such file or directory
E0707 16:47:56.593609   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/addons-589000/client.crt: no such file or directory
E0707 16:47:57.296034   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory
E0707 16:48:00.129315   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/skaffold-651000/client.crt: no such file or directory
E0707 16:48:04.346761   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 16:48:21.286129   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/functional-571000/client.crt: no such file or directory
E0707 16:48:23.250621   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/old-k8s-version-155000/client.crt: no such file or directory
E0707 16:48:33.882382   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/flannel-114000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-663000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.27.3: (4m56.274809059s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-663000 -n default-k8s-diff-port-663000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (296.42s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-nds9t" [6a6d1632-6761-4bf3-b7e4-5c72c1de404a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010159206s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-nds9t" [6a6d1632-6761-4bf3-b7e4-5c72c1de404a] Running
E0707 16:48:52.738756   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005954263s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-715000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-715000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/embed-certs/serial/Pause (1.77s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-715000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-715000 -n embed-certs-715000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-715000 -n embed-certs-715000: exit status 2 (137.040287ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-715000 -n embed-certs-715000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-715000 -n embed-certs-715000: exit status 2 (138.623411ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-715000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-715000 -n embed-certs-715000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-715000 -n embed-certs-715000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.77s)

TestStartStop/group/newest-cni/serial/FirstStart (49.32s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-488000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.27.3
E0707 16:49:19.218900   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory
E0707 16:49:29.423342   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kindnet-114000/client.crt: no such file or directory
E0707 16:49:45.173362   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/old-k8s-version-155000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-488000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.27.3: (49.315105312s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.32s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-488000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/newest-cni/serial/Stop (8.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-488000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-488000 --alsologtostderr -v=3: (8.269640524s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.27s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-488000 -n newest-cni-488000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-488000 -n newest-cni-488000: exit status 7 (52.604238ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-488000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/newest-cni/serial/SecondStart (38.08s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-488000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.27.3
E0707 16:50:03.634516   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/kubenet-114000/client.crt: no such file or directory
E0707 16:50:15.795396   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/auto-114000/client.crt: no such file or directory
E0707 16:50:34.308533   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-488000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.27.3: (37.934432988s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-488000 -n newest-cni-488000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.08s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-488000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/newest-cni/serial/Pause (1.85s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-488000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-488000 -n newest-cni-488000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-488000 -n newest-cni-488000: exit status 2 (144.377244ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-488000 -n newest-cni-488000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-488000 -n newest-cni-488000: exit status 2 (141.606461ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-488000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-488000 -n newest-cni-488000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-488000 -n newest-cni-488000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.85s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-sx269" [418d15cb-546f-46c9-91c0-386333b306da] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011606821s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-sx269" [418d15cb-546f-46c9-91c0-386333b306da] Running
E0707 16:51:56.629141   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/ingress-addon-legacy-298000/client.crt: no such file or directory
E0707 16:51:57.362602   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/calico-114000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006626054s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-663000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0707 16:52:01.383933   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/old-k8s-version-155000/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.17s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-663000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (1.87s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-663000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-663000 -n default-k8s-diff-port-663000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-663000 -n default-k8s-diff-port-663000: exit status 2 (145.091777ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-663000 -n default-k8s-diff-port-663000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-663000 -n default-k8s-diff-port-663000: exit status 2 (147.890276ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-663000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-663000 -n default-k8s-diff-port-663000
E0707 16:52:03.122160   29643 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16845-29196/.minikube/profiles/no-preload-836000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-663000 -n default-k8s-diff-port-663000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.87s)

Test skip (19/317)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (5.65s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-114000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-114000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-114000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-114000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-114000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-114000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-114000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-114000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-114000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-114000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-114000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: /etc/hosts:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: /etc/resolv.conf:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-114000

>>> host: crictl pods:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: crictl containers:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> k8s: describe netcat deployment:
error: context "cilium-114000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-114000" does not exist

>>> k8s: netcat logs:
error: context "cilium-114000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-114000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-114000" does not exist

>>> k8s: coredns logs:
error: context "cilium-114000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-114000" does not exist

>>> k8s: api server logs:
error: context "cilium-114000" does not exist

>>> host: /etc/cni:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: ip a s:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: ip r s:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: iptables-save:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: iptables table nat:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-114000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-114000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-114000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-114000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-114000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-114000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-114000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-114000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-114000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-114000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-114000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: kubelet daemon config:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> k8s: kubelet logs:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-114000

>>> host: docker daemon status:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: docker daemon config:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: docker system info:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: cri-docker daemon status:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: cri-docker daemon config:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: cri-dockerd version:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: containerd daemon status:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: containerd daemon config:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: containerd config dump:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: crio daemon status:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: crio daemon config:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: /etc/crio:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

>>> host: crio config:
* Profile "cilium-114000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-114000"

----------------------- debugLogs end: cilium-114000 [took: 5.232728591s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-114000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-114000
--- SKIP: TestNetworkPlugins/group/cilium (5.65s)

TestStartStop/group/disable-driver-mounts (0.37s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-407000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-407000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.37s)